Recent Research: Quantitative Methods for Policy Research


Improvements to Experimental Design and Quality

Improving Education Experiment Designs

A project led by Larry Hedges, IPR statistician and research methodologist, with the University of Chicago’s Eric Hedberg and IPR project coordinator Zena Ellison, seeks to establish design parameters for education experiments at the state, local, school, and classroom levels. Many current education experiments randomly assign entire pre-existing groups, such as classrooms and schools, to treatments, but these groups are not randomly composed. As a result, the data exhibit statistical clustering: Individuals in the same group tend to be more alike than those in different groups. Experimental sensitivity depends on the amount of clustering in the design, which is difficult to know beforehand. This project, funded by the U.S. Department of Education’s Institute of Education Sciences (IES), seeks to provide empirical evidence on measures of clustering, such as intraclass correlations and related design parameters, and to make them available to education researchers. It has already produced important results, including new methods of calculating standard errors for intraclass correlations and the software to compute them, and it will support future study designs. Project data are publicly available on IPR’s website. Hedges is Board of Trustees Professor of Statistics and Education and Social Policy and directs IPR’s Center on Improving Quantitative Methods for Policy Research.
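As a rough illustration of the kind of design parameter at issue, the sketch below estimates an intraclass correlation from a balanced one-way ANOVA decomposition. The data and cluster sizes are simulated for illustration and are not project data:

    # Minimal sketch: ANOVA estimator of the intraclass correlation (ICC)
    # for a balanced design (equal cluster sizes). Illustrative only.
    import numpy as np

    def intraclass_correlation(groups):
        """groups: list of 1-D arrays, one array of outcomes per cluster."""
        n = len(groups[0])                      # common cluster size
        k = len(groups)                         # number of clusters
        grand_mean = np.mean(np.concatenate(groups))
        cluster_means = np.array([g.mean() for g in groups])

        ms_between = n * np.sum((cluster_means - grand_mean) ** 2) / (k - 1)
        ms_within = sum(((g - m) ** 2).sum()
                        for g, m in zip(groups, cluster_means)) / (k * (n - 1))
        return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

    rng = np.random.default_rng(0)
    # 10 simulated classrooms of 20 students sharing a classroom effect
    clusters = [rng.normal(0, 1) + rng.normal(0, 2, size=20) for _ in range(10)]
    print(f"estimated ICC: {intraclass_correlation(clusters):.3f}")

Plugging an estimate like this into a power calculation is what lets researchers judge, before running a cluster-randomized trial, how many schools or classrooms they will need.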

New Prevention Research Standards

A decade ago, the Society for Prevention Research endorsed standards of evidence for research on preventive interventions, a rapidly evolving field. By doing so, the society intended to make research reviews more consistent and to determine what evidence is necessary to demonstrate an intervention’s effectiveness. In its flagship journal Prevention Science, IPR social psychologist Thomas D. Cook and his colleagues review the previous standards and introduce “next-generation” ones. They argue that developing and testing new interventions can form the basis of larger, more effective prevention systems, which requires a more flexible research cycle for preventive interventions. While other researchers focus solely on large-scale interventions that might change policy or legislation, Cook and his colleagues call for research on smaller group interventions, such as those aimed at improving childcare and family services, asserting that these smaller interventions are as likely as broad policy changes to spur meaningful change. The researchers recommend an open-minded approach that holds onto rigorous standards of evidence, reasoning that improving a person’s well-being can sometimes be accomplished by simpler means, like disseminating knowledge about what children and adolescents need to develop successfully. Cook is Joan and Sarepta Harrison Chair of Ethics and Justice.

Improving Time-Series Designs

Some programs do not lend themselves well to randomized controlled trials or regression discontinuity designs. One example occurs when researchers attempt to evaluate national or state programs, which are often expensive, apply to the entire population, are expected to have a broad impact, and are open to all who are eligible. In these cases, when researchers find it difficult to compare groups that have been exposed to an intervention with groups that have not, they often use interrupted time-series (ITS) designs. In the Journal of Research on Educational Effectiveness, Cook and two former IPR postdoctoral fellows, Manyee Wong of the American Institutes for Research and the University of Wisconsin–Madison’s Peter Steiner, examine how the basic ITS design can be compromised by threats to internal validity. Contending that a better design is needed, they experimented with adding more design elements to the basic ITS structure, creating a multiply supplemented ITS design. Using it to estimate the national effects of No Child Left Behind, they found that the law affected eighth-grade math achievement—a new discovery, since other researchers had confirmed positive effects only for fourth-grade math—as well as possible effects on fourth-grade reading. They also speculate as to why No Child Left Behind affected achievement in this way.
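For readers unfamiliar with the baseline design, the sketch below fits a standard segmented regression for a single interrupted time series on simulated data. It shows the basic, unsupplemented ITS model, not the multiply supplemented version the authors developed:

    # Minimal sketch of a basic ITS (segmented regression) model on
    # simulated data: outcome ~ trend + level shift + slope change.
    import numpy as np

    rng = np.random.default_rng(1)
    T, t0 = 24, 12                          # 24 periods; policy begins at t0
    t = np.arange(T)
    post = (t >= t0).astype(float)
    y = 50 + 0.5 * t + 4.0 * post + 0.8 * post * (t - t0) + rng.normal(0, 1, T)

    # Design matrix: intercept, secular trend, level shift, post slope change
    X = np.column_stack([np.ones(T), t, post, post * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"estimated level shift: {beta[2]:.2f}, slope change: {beta[3]:.2f}")

The supplements the authors propose (such as comparison series and additional pre-intervention observations) are ways of ruling out threats, like history or maturation, that a single series such as this cannot address on its own.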

Accessibility in Survey Experiments

IPR political scientist James Druckman and sociologist Jeremy Freese, now at Stanford University, are co-principal investigators of Time-sharing Experiments for the Social Sciences (TESS), an online platform for survey experiments that has received National Science Foundation (NSF) support. TESS aims to make survey experiments easier and cheaper for researchers to conduct. Its studies span a variety of social science fields, including anthropology, economics, psychology, political science, sociology, communication studies, and cognitive science. TESS researchers have also examined crowd-sourced data by analyzing the use of Amazon’s Mechanical Turk (MTurk), a fast-growing and cheaper alternative for conducting survey experiments, and have identified the conditions under which investment in a probability sample is necessary. Investigators are now using MTurk to explore the robustness of TESS experimental results over time. TESS is in the process of partnering with the Open Science Framework to provide an unparalleled resource for developing studies that build on past work, as well as for reanalyzing data and replicating extant work.

Data Use, Quality, and Cost in Policy Research

Bias Reduction in Quasi-Experiments

Observational studies—in which researchers observe subjects and measure variables without assigning treatments—must account for selection bias because individuals are not assigned to treatment groups at random. In the Journal of Research on Educational Effectiveness, Cook and his colleagues, including Steiner, consider ways to reduce selection bias in quasi-experiments. Specifically, they consider the selection of covariates—variables that can be used to control for selection bias—in quasi-experiments that have many covariates but where little is known about the process that generated the data. The researchers examined the effects of covariate selection in two datasets, each with at least 150 covariates. They found that if the number of covariates within a domain was held constant, increasing the number of domains increased the bias reduction; if the number of domains was held constant, including more covariates per domain also increased the bias reduction. Drawing from the data, they determined that the greatest bias reduction occurred when each domain contained at least five covariates, but that the number of domains mattered more than the number of covariates in each domain. The researchers hold that more attention should be paid to selecting covariates at the design stage, as the choice of covariates and their reliable measurement is key to decreasing selection bias in quasi-experiments.
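A simplified illustration of how bias reduction can be quantified appears below: a simulated quasi-experiment with a known true effect, where percent bias reduction compares a covariate-adjusted estimate with the naive one. The single-covariate setup is far simpler than the 150-covariate datasets the authors analyzed:

    # Minimal sketch: percent bias reduction from covariate adjustment
    # in a simulated quasi-experiment with a known true effect.
    import numpy as np

    rng = np.random.default_rng(2)
    n, true_effect = 5000, 2.0
    x = rng.normal(size=n)                      # covariate driving selection
    treated = (x + rng.normal(size=n)) > 0      # nonrandom selection into treatment
    y = true_effect * treated + 1.5 * x + rng.normal(size=n)

    naive = y[treated].mean() - y[~treated].mean()

    # Regression adjustment for the selection covariate
    X = np.column_stack([np.ones(n), treated, x])
    adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]

    pbr = 100 * (1 - abs(adjusted - true_effect) / abs(naive - true_effect))
    print(f"naive bias: {naive - true_effect:.2f}, "
          f"percent bias reduction: {pbr:.1f}%")

In real quasi-experiments the true effect is unknown, which is why the authors benchmark adjusted estimates against results from randomized experiments instead.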

More Accurate Earthquake Hazard Maps

In 2011, the magnitude-9.0 Tohoku earthquake and the resulting tsunami killed more than 15,000 people and caused nearly $300 billion in damage. The shaking from the earthquake was significantly larger than Japan’s national hazard map had predicted, devastating areas forecast to be relatively safe. Such hazard-mapping failures prompted three Northwestern researchers—IPR statistician Bruce Spencer, geophysicist and IPR associate Seth Stein, and IPR graduate research assistant Edward Brooks—to search for better ways to construct, evaluate, and communicate the predictions of hazard maps. In two articles, the scholars point out several critical problems with current hazard maps and offer statistical solutions to improve mapping. Currently, no widely accepted metric exists for gauging how well one hazard map performs compared with another. In the first article, the researchers used 2,200 years of Italian earthquake data to illustrate several statistical models that could be used to compare how well maps work and to improve future maps. Since underestimating an earthquake’s impact can leave areas ill-prepared, the scholars developed asymmetric measures that weight underprediction more heavily and can account for the number of affected people and properties. In the second article, the scholars offer further methodological guidance on when—and how—to revise hazard maps using Bayesian modeling, which updates a map’s probability estimates as new evidence accumulates. Stein is William Deering Professor of Earth and Planetary Sciences, and both articles began as IPR working papers.
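The sketch below illustrates one way such an asymmetric metric could be written, with underprediction penalized more heavily than overprediction. The weights and shaking values are invented for illustration and are not the authors’ calibration:

    # Minimal sketch of an asymmetric skill metric for hazard maps:
    # observed shaking above the mapped value (underprediction) is
    # penalized more heavily than overprediction. Illustrative only.
    import numpy as np

    def asymmetric_loss(predicted, observed, under_weight=3.0, over_weight=1.0):
        """Mean weighted absolute error; heavier penalty when observed > predicted."""
        err = observed - predicted
        weights = np.where(err > 0, under_weight, over_weight)
        return np.mean(weights * np.abs(err))

    predicted = np.array([0.2, 0.4, 0.3, 0.5])  # mapped peak ground acceleration (g)
    observed = np.array([0.5, 0.3, 0.6, 0.4])   # observed shaking (g)
    print(f"asymmetric loss: {asymmetric_loss(predicted, observed):.3f}")

A metric of this form can also be weighted by the population or property value at each location, reflecting the authors’ point that misses in densely populated areas are costlier.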

Census Accuracy and Benefit Analysis

In two IPR working papers, Spencer and IPR graduate research assistant Zachary Seeskin consider how to measure the benefits that would stem from improving the accuracy of the 2020 U.S. Census. In the first, the researchers examined two high-profile census uses: apportionment and fund allocation. Apportionment of the 435 seats in the U.S. House of Representatives is based on census tallies, so distortions in census results translate into distortions in the number of seats allocated to each state. Spencer and Seeskin expect that roughly $5 trillion in federal grant and direct-assistance money will be distributed at least partly on the basis of population and income data following the 2020 Census, meaning that distortions in census results also distort the allocation of funds. After describing loss functions to quantify the distortions in these two uses, they undertook empirical analyses to estimate the expected losses arising from alternative profiles of accuracy in state population counts. In the second working paper, Spencer and Seeskin performed a cost-benefit analysis of a proposed 2016 census of South Africa. The South African government uses its census for funding allocations and had to decide whether to conduct a 2016 census or rely on an alternative method. Conducting a census would provide up-to-date data on births, deaths, and migration, but the government could also opt for less exact, yet far less expensive, postcensal population estimates. Informed by the researchers’ evidence, the government decided to forgo the 2016 census and will instead seek to improve its capacity for producing postcensal estimates.
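A stylized example of such a loss function appears below: It measures the dollars misallocated when funds are distributed in proportion to erroneous rather than true population counts. The populations, budget, and formula are invented for illustration and are far simpler than those in the working papers:

    # Minimal sketch of an allocation loss function: funds distributed
    # proportionally to measured vs. true population. Illustrative only.
    import numpy as np

    def allocation_loss(true_pop, measured_pop, budget=1_000_000.0):
        """Total dollars misallocated when shares follow measured counts."""
        true_share = true_pop / true_pop.sum()
        measured_share = measured_pop / measured_pop.sum()
        # Halving avoids double-counting each dollar moved between areas
        return budget * np.abs(measured_share - true_share).sum() / 2

    true_pop = np.array([1_000_000, 2_000_000, 500_000], dtype=float)
    measured = np.array([980_000, 2_050_000, 495_000], dtype=float)  # census error
    print(f"misallocated funds: ${allocation_loss(true_pop, measured):,.0f}")

Comparing expected losses like this across accuracy scenarios is what allows the benefits of spending more on census accuracy to be weighed against the costs.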

Protecting Privacy in State Datasets

IES has spent more than $600 million helping states develop longitudinal data systems to better understand and improve the performance of American school systems. Yet concerns about protecting privacy and complying with the Family Educational Rights and Privacy Act (FERPA) are creating data-access barriers for researchers. With funding from IES, NSF, and the Spencer Foundation, and with the cooperation of a dozen states, Hedges and his research team are investigating methods to make large datasets available to researchers while still protecting individuals’ privacy.
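One widely used approach to this general problem, offered here purely as an illustration and not necessarily a method the team is pursuing, is to add calibrated noise to released statistics, as in differential privacy:

    # Illustrative only: releasing a count with Laplace noise, the core
    # mechanism of differential privacy. Parameters are hypothetical.
    import numpy as np

    def dp_count(true_count, epsilon, rng):
        """Noisy count; smaller epsilon means more noise and more privacy."""
        return true_count + rng.laplace(scale=1.0 / epsilon)

    rng = np.random.default_rng(5)
    print(f"released count: {dp_count(1_234, epsilon=0.5, rng=rng):.1f}")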

Prizes and Minorities’ Biomedical Careers

Though minority groups are the fastest-growing segments of the American population, they are underrepresented in biomedicine: Less than 8 percent of the nearly 7,000 doctorates in the biological and biomedical sciences awarded in 2007–08 went to African Americans and Hispanics. In response, universities and independent organizations have implemented programs to encourage underrepresented groups to complete degrees and enter scientific research careers, such as the Annual Biomedical Research Conference for Minority Students’ prizes for minority student research. Hedges and IPR research associate Evelyn Asch seek to understand whether such prizes encourage minority students to enter biomedical research careers—and if so, why. Among the factors the researchers consider are how winning a prize changes perceptions of minority students, students’ beliefs in their ability to be successful scientists, and students’ sense of themselves as scientists. The researchers will also examine whether one individual’s winning a prize affects others in the same institution. Data are being collected from individuals, public sources, and surveys in which winners explain how winning a prize affected them and runners-up detail how competing for prizes affected them. The project will offer insight into how to increase the number of minority biomedical researchers. The National Institute of General Medical Sciences provides project support.


Facilitation of Research Networks and Best Practices

Database of Research Generalizability

The importance of STEM (science, technology, engineering, and math) knowledge to innovation and economic growth has prompted many school districts to implement interventions promoting STEM in their schools. But experiments evaluating the effectiveness of these interventions give insight into just one population of students in just one school or district, leaving researchers at a loss as to how to use the results of one study to make statistical claims about another population or place. Supported by an NSF grant, Hedges and Columbia University’s Elizabeth Tipton, a former IPR graduate research assistant, are developing a statistical approach that makes such generalization possible and that helps researchers plan education experiments to be more generalizable to other populations. The project will also develop a public-use database to better characterize the populations to which researchers might want to generalize the results of STEM studies. The new statistical method will be tested, with support from NSF and IES, using data from 20 studies, and the researchers will share these methods through training at national conferences.
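One simple ingredient of such comparisons, shown below as an illustrative sketch rather than the project’s actual method, is the standardized mean difference between a study sample and its target population on each covariate:

    # Minimal sketch: standardized mean differences (SMDs) comparing a
    # study sample with a target population. Data and covariates invented.
    import numpy as np

    def standardized_mean_diffs(sample, population):
        """SMD per covariate: (sample mean - population mean) / pooled SD."""
        pooled_sd = np.sqrt((sample.var(axis=0) + population.var(axis=0)) / 2)
        return (sample.mean(axis=0) - population.mean(axis=0)) / pooled_sd

    rng = np.random.default_rng(4)
    # Hypothetical covariates: share low-income, average class size
    population = rng.normal([0.5, 12.0], [0.2, 3.0], size=(100_000, 2))
    sample = rng.normal([0.6, 11.0], [0.2, 3.0], size=(40, 2))
    print(np.round(standardized_mean_diffs(sample, population), 2))

Large differences on such diagnostics signal that results from the sampled sites may not carry over to the target population without reweighting.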

Node Sampling Scheme for Network Data

In cellular biology, data collection often uses “bait-prey” technologies to determine relationships between pairs of proteins, with one protein acting as “bait” to find an interacting protein, its “prey.” One such bait-prey technology is co-immunoprecipitation (CoIP). Due to budget, logistical, and other constraints, it is often too costly to map all relationships among proteins using CoIP. Spencer and Northwestern’s Denise Scholtens devised a sampling scheme for selecting baits in CoIP experiments that incorporates the data accumulated in earlier rounds. Their scheme led to a marked increase in the number of protein complexes correctly estimated after each round of sampling. Further development of these methods could expand researchers’ ability to measure and understand a wide array of network data.
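The sketch below conveys the flavor of adaptive bait selection using a deliberately simple rule, choosing the unsampled protein most often observed as prey so far. It is an illustrative stand-in, not Spencer and Scholtens’ actual scheme:

    # Illustrative adaptive bait selection: each round uses information
    # accumulated so far. Toy rule and toy network, not the authors' method.
    import random
    from collections import Counter

    def run_coip(bait, interactions):
        """Hypothetical CoIP assay: return the prey set observed for a bait."""
        return interactions.get(bait, set())

    def adaptive_baits(proteins, interactions, budget, seed=3):
        random.seed(seed)
        sampled, prey_counts = [], Counter()
        for _ in range(budget):
            candidates = [p for p in proteins if p not in sampled]
            # Prefer proteins frequently seen as prey; break ties at random
            bait = max(candidates, key=lambda p: (prey_counts[p], random.random()))
            sampled.append(bait)
            prey_counts.update(run_coip(bait, interactions))
        return sampled

    toy_net = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"},
               "D": {"E"}, "E": {"D"}}
    print(adaptive_baits(list(toy_net), toy_net, budget=3))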

Estimating Network Degree Distributions

Surveying a large online social network like Flickr, YouTube, or Amazon is prohibitively costly for researchers, who almost always choose to study smaller samples. But do smaller samples allow one to generalize results to the larger network? In the Annals of Applied Statistics, Spencer and Boston University’s Yaonan Zhang and Eric Kolaczyk frame this issue as a linear inverse problem and propose a least-squares estimator that solves it. Simulating this approach and applying it to online social media networks, they found that the method performs well regardless of the type of network, even when the sampled network is much smaller than the full one. As they demonstrated, the method reconstructed the degree distributions of various subcommunities within online social networks from Friendster, Orkut, and LiveJournal.
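The core idea can be sketched as follows: Under node sampling at rate p, observed degrees are approximately binomial thinnings of true degrees, so the observed distribution is a known linear transformation of the true one, and a constrained least-squares solve inverts it. The network size and distribution below are toy values, and the thinning model is a simplification of the paper’s setup:

    # Minimal sketch of the linear inverse problem: observed_dist ≈ P @
    # true_dist, with P[j, i] = Binom(i, p).pmf(j), inverted by
    # non-negative least squares. Toy sizes and data, illustrative only.
    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import binom

    p, max_deg = 0.5, 10
    true_dist = np.exp(-0.5 * np.arange(max_deg + 1))  # toy degree distribution
    true_dist /= true_dist.sum()

    # Thinning operator mapping true degrees to sampled degrees
    P = np.array([[binom.pmf(j, i, p) for i in range(max_deg + 1)]
                  for j in range(max_deg + 1)])
    observed = P @ true_dist                           # idealized observation

    est, _ = nnls(P, observed)                         # constrained least squares
    est /= est.sum()
    print(f"max recovery error: {np.abs(est - true_dist).max():.2e}")

The non-negativity constraint is what keeps the recovered values interpretable as a probability distribution even when the inversion is noisy.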

Data Biases in Social Network Samples

Businesses, government agencies, and others increasingly rely on big data to understand individual behavior, often culling their data from social network sites like Facebook and Twitter. In The Annals of the American Academy of Political and Social Science, IPR communication studies researcher Eszter Hargittai uses survey data to establish that people do not randomly select into social network sites. In fact, the opposite is true: Many sociodemographic factors, and even Internet skills and use, determine which social network sites a person uses. The study suggests that the biases reflected in users of social network sites limit the generalizability of findings drawn only from certain such sites. For instance, a study based on sociobehavioral data gathered from Facebook users might not apply to Internet users more generally, let alone to the population as a whole. The trove of behavioral data that lies in social network sites is vast and expanding, but Hargittai, who is Delaney Family Professor, urges researchers to address such limitations by triangulating their findings when possible and by explicitly stating the limits of their samples.

Matched Administrative Data Network

With support from NSF, IPR Director David Figlio, an education economist, continues to lead a national effort to bring scholars, policymakers, and administrators together to develop “next-generation” datasets that link administrative data, such as welfare and school records, to population data, such as birth certificates and death records. While creating these large-scale datasets requires sometimes complicated collaboration across levels of government and scholarly disciplines, it also creates opportunities for valuable insights and knowledge, especially in evaluating early childhood investments and interventions. In an IPR working paper, Figlio, with IPR research associate Krzysztof Karbownik and Kjell Salvanes of the Norwegian School of Economics, examines the use of matched administrative datasets in education research in the United States and Norway. They identify how these datasets can inform issues ranging from classroom technology to class-size effects and highlight how access to comprehensive data can lead to better research designs and data-driven education policies. A number of IPR scholars are part of the data network, including economists Jonathan Guryan and Diane Whitmore Schanzenbach, social demographer Quincy Thomas Stewart, psychobiologist Emma Adam, and biological anthropologists Christopher Kuzawa and Thomas McDade. Figlio is Orrington Lunt Professor of Education and Social Policy and of Economics. McDade is Carlos Montezuma Professor of Anthropology.
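The kind of linkage these datasets rest on can be illustrated with a toy join of birth records to school records on a shared child identifier. The frames, columns, and key below are hypothetical, and real linkages involve probabilistic matching and strict privacy safeguards:

    # Illustrative linkage: joining hypothetical birth records to school
    # records on a shared identifier. All data and column names invented.
    import pandas as pd

    births = pd.DataFrame({
        "child_id": [101, 102, 103],
        "birth_weight_g": [3200, 2450, 3900],
        "maternal_edu": ["HS", "BA", "BA"],
    })
    school = pd.DataFrame({
        "child_id": [101, 102, 104],
        "grade3_math": [220, 195, 240],
    })

    # Inner join keeps only children present in both record systems
    linked = births.merge(school, on="child_id", how="inner")
    print(linked)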

Interdisciplinary Training in Methodological Innovation

Supporting Learning Through Research

The annual conferences of the Society for Research on Educational Effectiveness, co-founded by Hedges, continue to serve as national meeting points for those interested in the latest in education research. Its spring 2015 conference took place March 5–7 in Washington, D.C., on the theme “Learning Curves: Creating and Sustaining Gains from Early Childhood Through Adulthood.” IPR faculty, including Figlio, Cook, and Hedges, were on hand to present some of their latest projects, and keynote speeches were given by some of the nation’s leading education researchers, including the University of California, Irvine’s Greg Duncan, an IPR faculty adjunct, and Mark Greenberg of Pennsylvania State University.

IES-Sponsored Research Training

The ninth Summer Research Training Institute on Cluster-Randomized Trials (CRT), sponsored by IES and its National Center for Education Research (NCER), took place July 20–30 in Evanston. Organized by Hedges and Spyros Konstantopoulos of Michigan State University, the institute provides researchers from around the country with a rigorous methodological framework and perspective. The sessions covered a broad range of topics in the design and execution of cluster-randomized trials, from relevant statistical software to more conceptual challenges, such as the framing of results. The institute culminated in a mock proposal process that allowed groups to receive feedback from fellow participants and institute faculty, improving their readiness to apply for competitive IES grants. Sessions were also taught by former IPR graduate research assistants Elizabeth Tipton of Columbia University and Chris Rhoads of the University of Connecticut. Cook ran another IES/NCER Summer Research Training Institute on Design and Analysis of Quasi-Experiments in Education with his longtime collaborator, the University of California, Merced’s Will Shadish, who sadly passed away in March 2016. Other organizers included former IPR graduate research assistant Vivian Wong, now at the University of Virginia, and former IPR postdoctoral fellows Coady Wing, now at Indiana University, and Peter Steiner of the University of Wisconsin–Madison. The 2015 institute exposed participants to a variety of quasi-experimental designs, which are distinguished by their use of methods other than randomization to compare groups. Working closely with workshop leaders and fellow attendees to understand and analyze these designs, participants honed their methodological skills while making important connections with other education researchers. NCER Commissioner Thomas Brock attended the CRT institute’s certificate ceremony, calling research and training like those offered in the workshops “one of the most important and useful things IES can do to ensure that the studies we fund are high quality.”