Research News

Helping Central Banks Think About Uncertainty and Expectations

IPR economist Charles Manski discusses survey data, probability predictions at international forum

Charles Manski's findings regarding people's expectations reveal ways for central banks to understand uncertainty
when forming monetary policies.

IPR economist Charles F. Manski was an invited presenter at the third International Monetary Fund (IMF) Statistical Forum in Frankfurt on November 19, speaking before approximately 200 international economists, statisticians, central bankers, and other invited guests.

“How do regulatory changes actually impact upon the behavior of households, firms, and banks—what was the aggregate impact?” said moderator Claudia Buch, deputy president of the German Central Bank, in introducing Manski for the first session on “The Relevance of Microdata for Evidence-Based Policies.” (The full conference agenda can be found here.)

“I’ve personally enjoyed a lot reading his book Public Policy in an Uncertain World,” and learned from the examples in it, she continued. “We have risks of acting too late; we have risks of maybe pursuing the wrong policy goals. So there’s a lot of uncertainty—which doesn't mean that we shouldn’t act, but we have to be aware of how we are dealing with this.”

Heterogeneity of Expectations

Manski began by saying that he decided to present his work on expectations because it was particularly important for central banks in situations where heterogeneity is a central determinant of policy outcomes.

“There are many situations in which heterogeneity is important,” Manski said. “I think one of the most critical is the heterogeneity of expectations that persons may hold for uncertain future events.”

Economists tend to believe that the way people think about future events is not homogeneous. For instance, one person might know more about the state of the economy than another, and so might think it more likely that a crash will occur.

Expectations are “basically what goes on inside people’s heads,” formed by either how they think or by the models they use to interpret events, Manski explained, noting that for many years, it was “quite rare” that economists would collect survey data on such expectations.


“We were all taught to believe what people do and not what people say,” Manski said, recalling his early experiences in graduate school at MIT.

Economists still needed the information, so what did they do? They chose to infer expectations indirectly, collecting behavioral data and combining it with assumptions about how people form their expectations.

“It is clear that it is a daunting task to infer people’s expectations from the choices they make,” Manski said. “You have to speculate about what information processes people use, what information people have, and how they use the information they have to inform their expectations.”

However, collection of such data has increased over the past 25 years. Economists now regularly collect microdata through household surveys in many countries, gathering ordinary people's views on a wide variety of macroeconomic events, such as future GDP growth, inflation, and stock market performance, as well as personal expectations regarding job loss, death, future income, and purchases of durable goods. There is also a long history of asking professional forecasters about macroeconomic trends.

“So there’s now a lot of expectations data available,” he said.

Dealing with Expectations Data

Manski summarized his research insights on how to collect, analyze, and interpret these data. His presentation drew on his extensive past work on measuring probabilistic expectations, and several articles and papers in particular.

He first described his work with Jeff Dominitz of Resolution Economics studying people’s expectations for stock returns. Much of the research in finance dismisses the idea that people could hold different beliefs about stock market returns, Manski pointed out, with experts considering such beliefs as either “nonexistent” or “unimportant.” In their study, Manski and Dominitz asked respondents in several U.S. surveys a series of questions about financial and household expectations. One example was the percent chance that a $1,000 investment in a diversified mutual fund would increase in value in the year ahead. Across the surveys they examined, responses varied substantially by sex, age, and education, yet longitudinal data suggested that individual beliefs remained largely stable over time.

“It’s reasonable to think of the population as a mixture of expectation types, each forming beliefs in a stable but different way,” Manski said.

They then classified people’s expectations into three types based on standard finance theories and studied their prevalence, finding that all three types exist. One—the persistence model, describing people who think that if the stock market is going up, it will continue to go up in the near future—was the most prevalent, though even it accounted for less than half of respondents. So it is “key” to acknowledge the heterogeneity, or differences, in people’s expectations, Manski said.
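The idea of a mixture of expectation types can be illustrated with a minimal, hypothetical classifier. The rules and the 5-point threshold below are illustrative assumptions, not Dominitz and Manski's actual procedure:

```python
# Hypothetical sketch: classify a survey respondent's expectation type
# from last year's market return and the percent chance (0-100) they
# assign to the market rising next year. Thresholds are assumptions.

def classify(recent_return: float, prob_gain: float) -> str:
    if abs(prob_gain - 50) <= 5:
        # Beliefs insensitive to recent returns: roughly 50-50 either way.
        return "random walk"
    expects_gain = prob_gain > 50
    if expects_gain == (recent_return > 0):
        # The recent trend is expected to continue.
        return "persistence"
    # The recent trend is expected to reverse.
    return "mean reversion"

# After a 10% market gain, a respondent giving an 80% chance of a
# further rise would be classified as a persistence type.
print(classify(10.0, 80))
```

Tallying such classifications across respondents would then give the prevalence of each type, with no single type covering a majority.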

The Pros of Probability Predictions

He then turned to the long-standing practice of asking experts for point predictions of aggregate statistics. The problem is that point predictions convey nothing about forecasters' uncertainty. So in a 2009 article, written with the University of California, San Diego's Joseph Engelberg and Penn State’s Jared Williams, the researchers compared professional forecasters' point predictions for GDP growth and inflation with their probabilistic responses. They used data from the Survey of Professional Forecasters (SPF), which has asked for both point and probability predictions for more than 30 years.

Manski and his colleagues found that the differences were not large, but SPF forecasters’ point predictions tended to give a more favorable view of the economy than the “means, medians, or modes” of their probability distributions. This finding, together with the basic fact that point predictions reveal nothing about experts’ uncertainty, led the researchers to conclude that it would be better to ask for probabilistic predictions than point forecasts. In a 2011 article, they went on to discuss how this might affect consensus forecasting, which takes point predictions from individual forecasters, combines them into a single consensus forecast, and examines how that forecast varies over time. The problem is that this method invariably loses three critical pieces of information: experts’ uncertainty, their disagreement, and changes in which forecasters responded from quarter to quarter.
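The information loss can be made concrete with a small sketch. The numbers below are illustrative, not SPF data: averaging point forecasts keeps only the central tendency, discarding both the spread across forecasters (disagreement) and each forecaster's own stated uncertainty.

```python
# Hypothetical sketch: what a consensus point forecast keeps and loses.
# Figures are made up for illustration; they are not SPF data.
import statistics

# Each forecaster reports a point prediction for GDP growth (percent)
# and a subjective uncertainty (std. dev. of their probability forecast).
forecasts = [
    {"point": 2.1, "uncertainty": 0.4},
    {"point": 2.9, "uncertainty": 1.2},
    {"point": 1.5, "uncertainty": 0.3},
    {"point": 2.5, "uncertainty": 0.9},
]

points = [f["point"] for f in forecasts]

# The consensus forecast keeps only the central tendency...
consensus = statistics.mean(points)

# ...and discards disagreement (spread of the points) and the
# forecasters' own uncertainty (their probability distributions).
disagreement = statistics.stdev(points)
avg_uncertainty = statistics.mean(f["uncertainty"] for f in forecasts)

print(f"consensus forecast: {consensus:.2f}%")          # 2.25%
print(f"lost: disagreement across forecasters = {disagreement:.2f}")
print(f"lost: average individual uncertainty  = {avg_uncertainty:.2f}")
```

Two panels of forecasters could share the same consensus number while differing sharply in disagreement and uncertainty, which is exactly what the single consensus figure hides.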

“I think for forming monetary, macroeconomic policy, and so on, it’s important to understand people’s uncertainty as well as the central tendency of the forecast,” Manski said.

Their suggestion for avoiding this information loss? Display each of the approximately 40 forecasters’ predictions as an individual arrow, either ascending, descending, or flat, to provide a more revealing, heterogeneous visual display. Manski showed a graph that tracked predictions for GDP growth after 9/11 and for Ben Bernanke’s impact on inflation when he was appointed chairman of the U.S. Federal Reserve.

“You can just read this off visually from the graph,” Manski summarized. Such a display conveys the central tendency and spread of the forecasts, as well as the differences between individual forecasters.

Watch a video recording of Manski's presentation; his slides can be viewed here.

Charles F. Manski is Board of Trustees Professor in Economics and an IPR fellow.