Measuring a Foundation's Performance

Spencer President Michael McPherson offers inside view

Michael McPherson discussed how one might evaluate whether a foundation is doing a good job when no serious standards exist for measuring its effectiveness or performance.

The Spencer Foundation, one of the nation’s premier education research foundations, must spend 5 percent of its more than $525 million endowment each year. At present, this money goes to fund grants, fellowships, and other programs that the foundation hopes will shape the future of education. Yet how can one define “success” when it comes to this expenditure—particularly when its effects will not be evident for years to come? Is “success” one well-cited study funded by a grant? Is it simply supporting the work of a large number of researchers? Or is it something else altogether?

Michael McPherson, president of the Spencer Foundation, addressed this conundrum in a special lecture on November 10 held at Northwestern University’s Evanston campus. McPherson, an education economist, has been president of the foundation for the past 12 years.

He pointed out that because foundations serve a public purpose, they have the leeway to “invent themselves to a degree that virtually no other institution gets to do,” unlike other institutions such as universities. However, they also have little accountability for what they set out to accomplish.

“Foundations are not unregulated, but there are no serious standards or expectations of effectiveness in the work that they do,” he observed.

The lecture took place in IPR’s series on performance measurement in the public and nonprofit sectors, organized by IPR economist Burton Weisbrod. It drew nearly 80 attendees, including Northwestern’s President Morton Schapiro and Provost Daniel Linzer, as well as Lloyd A. Fry Foundation President Unmi Song and Michael Feuer, president of the National Academy of Education.

“Nobody could appreciate more than he does the importance of these issues of measuring and rewarding good performance,” Weisbrod said in introducing McPherson. 

Noted David Figlio, IPR director and education economist, “It is a rare treat for me to hear a speaker whose research has influenced my thinking so much, and whose organization has influenced my research so much.” 

Mission and Methods

“All foundations are unique, but the Spencer Foundation is unique in a very special way,” McPherson said in describing its origins and founder, Lyle Spencer. “Lyle was not only a shrewd and successful businessman, he was also a man of ideas.”

Spencer made his money from education-related testing, and his company produced the first national merit scholarship qualifying test. When he established his eponymous foundation in 1962, its mission became “investigating ways in which education can be improved around the world. Broadly conceived, wherever learning occurs.”

“He had a mission, and he had a strategy,” and clearly defined ones at that, McPherson said, and this is where one should start when evaluating an education research foundation.

“He thought the way to improve education was to search for effective new ideas. That is, to do research,” McPherson said.

McPherson shares a smile with a long-time collaborator, Morton Schapiro, Northwestern University president, professor, and IPR fellow.

The next step in evaluating a foundation’s success, according to McPherson, is to determine if “the major activities they are pursuing make sense in light of the strategy they have.” 

Taking the Spencer Foundation as an example, McPherson explained his organization’s three main, interconnected activities: capacity building; communications and networking; and grantmaking.

Education is not going to be transformed by any “magic bullets,” McPherson stated, and that is why it is important to build the capacity of future generations of education researchers. To this end, Spencer invested heavily in improving the research environment at education schools and in supporting the creation of valuable research databases. But perhaps its biggest contribution to research capacity has been “investing in the human beings who will be the future generation of education researchers” through graduate and postgraduate fellowships.

Second, Spencer aims to build communication and networking links between education researchers, those who might benefit from their research, and the public—who, after learning about the research, “can ask better questions of their elected officials,” McPherson said. To improve public understanding, the foundation launched a program with the Columbia School of Journalism to train journalists with social science chops to “write seriously about education.”

Third, grantmaking represents the meat of the foundation’s work and is perhaps “the most difficult area in which to assess progress,” McPherson said. The Spencer Foundation funds field-initiated, peer-reviewed research, requests for proposals, midcareer grants, and more.

Evaluating an Initiative’s Impact

While some initiatives might be relatively easy to measure, others, such as a journalism training program, are harder to gauge, with effects that may not be visible for years.

In 2010, the Spencer Foundation embarked on an “ambitious evaluation” of its dissertation and postdoctoral fellowship programs, despite the fears of the program’s beneficiaries and the organization administering the program, McPherson said. The evaluation comprised a regression discontinuity analysis led by IPR education researcher and statistician Larry Hedges on the quantitative side and a qualitative evaluation by current William T. Grant Foundation president Adam Gamoran.

McPherson said he was “explicit” that, “We’re going to do this, and if we conclude this work is not having the effects we think it has, we are going to end it.” 

It was a “serious evaluation” that showed remarkably positive quantitative effects for both the postdoctoral and dissertation fellows. Evidence suggested that quality interactions among the fellows and mentor support helped explain the positive impact.

When measuring a research foundation’s activities, look to see if they are “skillfully executed,” McPherson summed up. “Try to measure what you can measure responsibly. But don’t limit yourself only to things that you can measure well.”

Beyond the aforementioned criteria—clear missions and strategies and skillfully executed, relevant activities—McPherson added one final criterion of his own: “It’s really important that you ask the question, ‘Is the foundation itself alive with ideas? Are people excited to talk to researchers? Do we inspire open-minded inquiry?’”

“That’s a tough standard; I don’t think we always meet it, but I think it should always be our ideal,” he concluded.

Michael McPherson is president of the Spencer Foundation. Burton Weisbrod is John Evans Professor of Economics, an IPR fellow, and chair of IPR's Performance Measurement and Rewards research program. David Figlio is IPR director and Orrington Lunt Professor of Education and Social Policy and of Economics.  

Photo credit (top, inset): Sally Ryan