“I hope you’ll keep in mind that economic forecasting is far from a perfect science. If recent history’s any guide, the experts have some explaining to do about what they told us had to happen but never did.” —Ronald Reagan, January 21, 1984
“Since the destruction of the Second Temple, prophecy has become the lot of fools.” —Hebrew expression1
From Herbert Hoover’s happy prediction that prosperity was just around the corner to Ronald Reagan’s steadfast promise in October 1980 that the fiscal 1984 budget would show a $30 billion surplus, economic forecasts have been wrong. Often, indeed, they have been in wild disagreement with one another. In December 1981, for example, when the winds of recession were beginning to blow strongly, the 44 leading economic forecasting services covered by Robert Eggert’s Blue Chip Economic Indicators, a monthly publication, showed predictions for real GNP growth in 1982 that ranged all the way from +4.0% over 1981 by the most optimistic forecaster to a –1.7% decline by the gloomiest. Their expectations for pretax profits ranged all the way from –19.9% to +19.0%!
This is all very amusing, but it suggests some serious questions. Given the frailties of the process, can forecasts have any value for business executives who have to make decisions about how much inventory to carry, how aggressively to price their products, how vigorously to resist wage demands, and when and how to finance expansion? If not, whom can they believe?
And even if they can answer these difficult questions, why should they bother with economic outlooks in the first place? Is there a cost for ignoring them and listening only to the signals from their own companies and their own industries?
We argue that economic forecasts deserve to be taken seriously, not necessarily because they promise to be accurate but because they are so much more useful than having no forecasts at all. We do not say that managers should listen to all forecasts or to all forecasters; that is the sure road to total confusion. Rather, we say that forecasts properly used and understood will lead to better business decisions than forecasts ignored or naively used.
Better Than Nothing?
A respectable body of thought argues that expectations are so rapidly embodied in decisions that no one can make forecasts better than those implicit in the marketplace itself. This viewpoint has enjoyed most prominence in markets where pricing is based on expectations, such as the stock and bond markets and the markets for commodity futures.
Some observers argue also that even the pricing of nonfinancial goods and services, including labor, reflects anticipation of conditions to such an extent that forecasting the next change—in either direction or of any magnitude—is likely to be hazardous. The best forecast may simply be that tomorrow will look like today or like today’s trends extrapolated to tomorrow.
Note that the proponents of this position do not argue that these implicit forecasts in the marketplace will be correct. They know better. They do say, however, that these forecasts will be less wrong than the predictions of individuals who think they know more than the millions of forecasts already imbedded in the market by the real-time decisions of participants in it.
While the financial markets offer enough evidence to justify taking this view seriously, it is much more controversial when we look at it in terms of the economy. Unlike the movement of actively traded financial assets (which may indeed have the features of a random walk), the swings in real GNP, inflation, unemployment, industrial production, and earnings are part of a process, a process in which one stage leads inexorably to the next and in which decisions once made are difficult to reverse. Only the timing of the process is hard to predict; its fundamental character is by no means obscure.
Hence the expectation that tomorrow’s figures will be the same as today’s, or an extrapolation of today’s trends, is certain to be wrong. In fact, it is likely to be more wrong than a careful prediction based on some understanding of the process that leads the business cycle to evolve from today to tomorrow.
Suppose it is 1972 and you are sitting and contemplating the outlook for 1973 or 1974. Since 1952 the annual change in real GNP has averaged 3.4%, with a standard deviation of 2.3 percentage points. Your first problem is that the one-standard-deviation band, which covers roughly 68% of the probability, puts real growth in 1973 anywhere between 1.1% and 5.7%—a spread so wide relative to the mean growth rate that your only rational expectation would be that anything could happen in 1973 and 1974.
As we know, that is precisely what did occur: real GNP in 1973 was 0.6% below 1972, while in 1974 it fell 1.2% below 1973. These changes fell outside even the band of one standard deviation around the mean. If you had been playing around with an inflation forecast on the same basis, your estimates would have been catastrophically off base.
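The arithmetic of the extrapolator’s dilemma can be sketched in a few lines of Python. The figures are the ones cited above; the band is the mean plus or minus one standard deviation, which covers roughly 68% of a normal distribution.

```python
# One-standard-deviation band for annual real GNP growth, using the
# 1952-1972 figures cited in the text (3.4% mean, 2.3-point std. dev.).
mean_growth = 3.4   # percent
std_dev = 2.3       # percentage points

low, high = mean_growth - std_dev, mean_growth + std_dev
print(f"~68% band: {low:.1f}% to {high:.1f}%")   # 1.1% to 5.7%

# The outcomes cited in the text: 1973 came in 0.6% below 1972,
# and 1974 fell 1.2% below 1973.
for year, actual in [(1973, -0.6), (1974, -1.2)]:
    status = "inside" if low <= actual <= high else "outside"
    print(f"{year}: {actual:+.1f}% is {status} the band")
```

Even the widest plausible extrapolation band, in other words, failed to contain what actually happened.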
We can find a more elegant demonstration of the weakness of extrapolative techniques in an analysis of forecast accuracy by Stephen McNees and John Ries, economists at the Federal Reserve Bank of Boston. McNees and Ries studied the performance of a group of leading forecasting organizations as measured by their quarterly forecasts of principal economic variables from the first quarter of 1971 through the second quarter of 1983.
The researchers also provided a benchmark “forecast”—an unconditional autoregressive integrated moving-average projection based only on the past historical values of the predicted variables. This so-called ARIMA technique is the most sophisticated procedure available for extrapolating past trends into the future. Where the forecasted series has the essential elements of a random walk, the ARIMA forecast will outperform conventional extrapolation forecasts with a high degree of probability.
McNees and Ries used this benchmark projection for real GNP and for nominal GNP (real GNP plus the effect of inflation) for forecasts made early each quarter for the current quarter and each of the three following quarters. Overall, the benchmark ranked below the six conventional forecasts in the test. It was not the worst estimate in every single quarter, but it was always close to the worst. Its average error of 2.80 percentage points for real GNP contrasts with 2.68 for the worst conventional forecast and 2.39 for the group as a whole. Its error for nominal GNP averaged 3.83 points, as against 3.78 points for the worst conventional forecast and 3.34 points for the group.
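The score behind these comparisons is simply the mean absolute difference between forecast and outcome, in percentage points. A minimal Python sketch shows how such a score is computed; the forecast and outcome figures below are hypothetical illustrations, not the McNees-Ries data.

```python
# Mean absolute error: the accuracy metric used to rank the forecasts.
def mean_absolute_error(forecasts, actuals):
    """Average of |forecast - actual| over all periods, in percentage points."""
    assert len(forecasts) == len(actuals) and forecasts
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

actual_growth = [2.5, -0.3, 3.1, 4.0]   # hypothetical outcomes
benchmark     = [3.4, 3.4, 3.4, 3.4]    # naive "average year" extrapolation
judgmental    = [2.0, 0.5, 2.5, 3.2]    # hypothetical conventional forecaster

print(mean_absolute_error(benchmark, actual_growth))
print(mean_absolute_error(judgmental, actual_growth))
```

A lower score means a better forecast; on these made-up numbers the judgmental forecaster beats the flat extrapolation, which is the pattern McNees and Ries found in the real data.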
In short, when it comes to making judgments about the outlook for the real economy, you are better off listening to professional forecasters than acting on the assumption that anything can happen or that tomorrow will simply reflect trends in effect today.
But Which Forecaster?
The yawning gap between the most optimistic and the most pessimistic participants in the Blue Chip survey for 1982 was not an extreme or even an atypical case. Forecasters often disagree, so that some of them are certain to be wrong. Yet if any of them were wrong all the time, chances are that the poor folks would not continue in business. So we have no prima facie method for knowing which ones will be on target, or even close to the target, in any given situation.
The track records of particular forecasters are even more confusing. McNees and Ries showed that the best forecaster in any one year had little assurance of coming out on top in the following year. Furthermore, some were better than others at predicting, for instance, the course of prices or production or government spending. On some occasions, however, the superior price forecasters were better on the consumer price index than on the GNP deflator, and sometimes the results were the other way around. The best organization on government spending had a rotten record on the deficit outlook.
The most comprehensive and authoritative study of forecast accuracy confirms this erratic performance. The study, done by Victor Zarnowitz of the University of Chicago for the National Bureau of Economic Research and published in December 1982, was based on quarterly surveys conducted since 1968 by NBER and the American Statistical Association. These surveys cover more than 70 forecasting organizations and analyze the results for inflation, real growth, unemployment, nominal GNP, consumer expenditures on durable goods, and changes in business inventories.
Zarnowitz concluded, “It is difficult for most individuals to predict consistently better than the group. For most people most of the time, the predictive record is spotty, with but transitory spells of relatively high accuracy.”2
It is hard to find much encouragement there!
What to Do?
There is, however, a ray of hope in Zarnowitz’s finding: the probabilities are high that on average the consensus will be less wrong than one person’s forecast, and almost certainly less wrong than ARIMA projections and other mechanical extrapolations of the recent past.
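The statistical logic behind that finding can be illustrated with a small simulation. Assuming, purely for illustration, that each panelist’s forecast equals the true outcome plus independent random noise, the average of many forecasts cancels much of the noise and lands closer to the truth than a typical individual does. All the parameters below are illustrative assumptions, not survey data.

```python
# Why the consensus tends to be "less wrong" than one forecaster:
# independent errors partly cancel when averaged.
import random

random.seed(7)
TRUE_GROWTH = 3.0      # hypothetical actual outcome, percent
N_FORECASTERS = 44     # size of the Blue Chip panel cited in the article
N_TRIALS = 2000

individual_err = consensus_err = 0.0
for _ in range(N_TRIALS):
    # Each forecast = truth + independent noise (std. dev. 1.5 points, assumed).
    forecasts = [TRUE_GROWTH + random.gauss(0, 1.5) for _ in range(N_FORECASTERS)]
    consensus = sum(forecasts) / len(forecasts)
    individual_err += abs(forecasts[0] - TRUE_GROWTH)  # one arbitrary panelist
    consensus_err += abs(consensus - TRUE_GROWTH)

print(f"avg individual error: {individual_err / N_TRIALS:.2f} points")
print(f"avg consensus error:  {consensus_err / N_TRIALS:.2f} points")
```

Under these assumptions the consensus error shrinks roughly with the square root of the panel size; in practice forecasters’ errors are correlated, so the gain is smaller, but the direction of the advantage survives.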
But is that prospect good enough? We want a forecast that will be right. One that is less wrong than another is no help if it bears little resemblance to what actually occurs. In answer to this objection, we present some convincing evidence that the errors made by consensus economic forecasts are small enough to make them valuable for business decisions.
This evidence comes from analysis of the Blue Chip Economic Indicators surveys, which show the monthly consensus forecast of the year-to-year percentage rate of change of principal economic variables. Unlike most surveys of this kind, the Blue Chip is broadly based with an unchanging group of respondents. Moreover, because it appears monthly, it reflects views less than two weeks old when they are published. Other surveys are either irregular in timing or only quarterly. Our analysis covered the group’s predictions for real GNP and inflation (as measured by the GNP deflator) for 1977 through 1983.
Robert J. Eggert, chief economist of the Blue Chip survey (now 48 organizations), interviews his panel of leading economists and organizations early each month to get their expectations for year-over-year changes in each variable. For example, beginning in June 1980, each panel member goes on record with a forecast for the 1981 outlook; the published survey shows the predictions of each participating organization and the group consensus. The panelists continue to predict the outlook for 1981 until June 1981 rolls around, at which time they jump forward to an estimate for the year ahead, or 1982. And so on. (We refer to the year being forecast as the target year and the year in which the process begins as the current year.)
In analyzing the results we sought the answers to two questions:
Did early forecasts, or those made before January of the target year, at least indicate whether production and inflation would show greater or smaller rates of change than the current year?
Did successive monthly projections move deliberately toward the ultimately correct figure or did they wander away from it or move in random fashion as the target year approached its midpoint?
The accompanying charts give the answers. Each diagram in the Exhibit shows the year-over-year percentage change from the current year to the target year as a horizontal line running from Month 13 through Month 24. The monthly prediction appears as a wiggly line beginning with Month 6 and running through Month 17 (that is, June through May). The large black dot on the vertical axis of each chart shows the average rate of change of the variable during the current year—included to indicate whether the forecasters were expecting a stronger or a weaker year to come.
Exhibit Predictive accuracy of 44 leading economists and forecasting organizations, 1977–1983
Each year has its own peculiarities, but the overall record is impressive. The group shows an average error of only 1.1 percentage points between the October forecast of real GNP for the target year and the actual figure. By and large the early forecasts capture the direction of change from the current year, and, with the passage of time, the consensus does move toward the actual figure.
The second year in the survey, 1978, turned out to have the worst record. The GNP forecast looks outrageously low, but extensive revisions in the National Income Accounts after 1978 changed the original official estimate for 1978 GNP from 4.4% above 1977’s to 5.0%. This alteration explains why the Blue Chip consensus looks so stubbornly low.
The inflation forecast, however, has no such excuse. Here the group persisted in predicting little change from 1977 when in fact the inflation rate moved from less than 6% to 7.4%. Only late in the game, in the spring of 1978, did the panel reluctantly—and much too timidly—raise its sights toward reality.
The economists’ predictions for 1979 look better, even though the early inflation forecast was far off the mark and actually set a lower figure than the 1978 rate. A regular follower of these estimates would nevertheless have noted the steep and persistent rise in the projection for inflation, as the forecast moved from 6.7% in June 1978 to about 7.7% in December and then landed right on the button by May 1979. Furthermore, as we can see, the group was early and consistent in recognizing that the intensification of inflationary pressures would dampen real growth.
In each of the other five years, the early forecast either spotted the right direction or quickly corrected itself when it was wrong. Thus, by the end of the current year a good sense of the character of the target year had already been established.
Note the phrase “a good sense of the character of the target year.” In view of all the uncertainties surrounding any business decision, managers will care very little whether the outlook for real growth is 4.5% or 5.5% or the outlook for inflation is 6.2% or 6.9%. What they need is a sense of whether the overall level of business activity will be rising or falling, whether the pace will be faster or slower than in the current year, and whether the environment will encourage aggressive or cautious pricing.
In this context, we suggest that these consensus forecasts satisfy executives’ requirements. That is so even when economic conditions change radically, as they did in 1982.
In mid-1981 the Blue Chip consensus failed to recognize that a recession lay just ahead or that the rate of inflation was on the verge of collapsing. Give the economists black marks for those errors. On the other hand, the group did start off by forecasting a lower rate of inflation in 1982—though it did underestimate the degree of change. Furthermore, although the forecast for real GNP in 1982 remained positive until January of that year, it moved downward persistently and at an accelerated rate during November and December, by which time both the real GNP forecast and the inflation forecast were signaling that 1982 would be radically different from 1981.
The predictions for 1983 also deserve comment. Admittedly, most forecasters failed to foresee the vigor of last year’s recovery. Can you fault that error when, in mid-1982, close to the very depths of the recession and some five months before the trough, this group was unambiguously expecting a recovery in business activity in 1983? The economists were less aggressive early on in projecting the rapid decline in inflation, which actually fell to 4.2% from 6.0% in 1982 and 9.4% in 1981; but their estimate declined steadily and with growing momentum during the second half of 1982.
Is It All Too Easy?
Some people have argued—Milton Friedman has argued vociferously—that there is nothing impressive about this record. They maintain that forecasting rates of change from this year’s level to next year’s should be a cinch and that the increasing accuracy of estimates as we move into the target year should be no surprise.
After all, we begin the game with the information on this year’s conditions. As time passes, what is known increases and the projection element diminishes. As Friedman put it in recent correspondence, “It would be absolutely astounding if the error did not tend to decrease as the period of time went on… [The forecaster] needs to know less and less because more and more is already known.”
Friedman’s point is indisputable. Its significance for the usefulness of systematic and regular consensus forecast surveys, however, is something else again.
It would of course be preferable if the Blue Chip economists were willing to put their names to forecasts running from the current quarter to, say, four quarters ahead, in which case little already-known information would be used. While the Blue Chip indicators do show forecasts of that type, Eggert attaches no organization names to the quarterly forecasts, on the ground that they are indeed likely to have larger margins of error. It is a fair guess that these forecasters would take more care in arriving at the numbers attached to their names than in furnishing numbers published anonymously.
The evidence we have given is impressive nonetheless because even the early forecasts—those made during the latter half of the current year—were largely accurate in direction. As most corporate managers make their plans for the coming year during the autumn of the current year, those early indications can make an important contribution to the planning process.
Does Anyone Care?
Is all this an intellectual exercise, or does it matter? To put the question differently, can you run a business successfully without making judgments about what the future will look like?
The answer to that question is obviously no; judgments about the future are unavoidable. The quality of those judgments, however, is another matter. Many business executives nurse painful memories of inventories urgently accumulated just before all the orders vanished, of irreplaceable employees laid off just before the customers started flocking back, of plants built in expectation of ever-rising sales that failed to materialize, and of prices raised so high that the company lost business to competitors.
The factor that makes these errors common is the human tendency to expect tomorrow to look like today or, worse, to extend today’s trends out to tomorrow. Change is not only unwelcome; it is also difficult to visualize. The unfortunate consequence is that surprise continually overtakes us.
Indeed, the business cycle itself, despite its varied roots, is in many ways a process in which corporations join one another in overdoing their optimism and pessimism and then having to correct their errors. Inventory ordering, borrowing, pricing, employment, and capital spending run to unsustainable heights and then fall to unsustainable lows.
The attempts of each business to correct its excesses in one direction or the other only make matters worse for the other businesses that are trying to do the same thing at the same time. Obviously, corporate executives are not paying attention to the economic consensus.
But we all do listen to forecasts in one form or another. Some of us listen to customers or competitors. Some heed our friendly bankers. Many sit up and take notice when a Wall Street guru predicts Armageddon or nirvana. One or another of these forecasts will often be correct, but none of them has enough consistency to meet even the crudest tests of statistical significance. Which means that most of the time they supply noise, not information.
Nevertheless, early warning of change is essential to avoid being caught in the gales that blow through the economy. Reliable forecasts that tell us we are approaching the late stages of an expansion or the early stages of a recession can lead us to act in anticipation of those developments and, in the process, to avoid the excesses that cause them. If such is the case, we should listen to sources that are sensitive to the development of unsustainable rates of growth or shrinkage in the economy—sources that understand the cyclical process and can recognize how it has been evolving.
This is what economists are trained to do. While the record suggests that they have less consistency as forecasters than each of them (or we!) would like to have, the law of large numbers works in their favor. We should bet with the consensus.
1. Stephen McNees and John Ries, “The Track Record of Macroeconomic Forecasts,” New England Economic Review, November–December 1983, p. 5.
2. Victor Zarnowitz, “The Accuracy of Individual and Group Forecasts from Business Outlook Surveys,” National Bureau of Economic Research Working Paper 1053, December 1982, pp. 9–10.
A version of this article appeared in the September 1984 issue of Harvard Business Review.