Recent Research: Quantitative Methods for Policy Research


Methodological Training for Education Research

IES-Sponsored Research Training

Aiming to increase the national capacity of researchers to develop and conduct rigorous evaluations of the impact of education interventions, the National Center for Education Research in the Institute of Education Sciences (IES), the research wing of the U.S. Department of Education, continued to support a training workshop co-organized by Larry Hedges. Hedges, along with Michigan State's Spyros Konstantopoulos, led the seventh Summer Research Training Institute on Cluster-Randomized Trials (CRTs) in education from July 15–25 at Northwestern University. Thirty researchers from around the country participated in the two-week training, which focuses on cluster randomization, a methodological tool that helps account for the group effects of teachers and classrooms when measuring an intervention's effects on individual student achievement. The intensive sessions cover a range of specific topics in the design, implementation, and analysis of education CRTs, from conceptual and operational models to sample size and statistical power. Participants also learn to use software such as Stata and HLM to conduct hierarchical data modeling, and they work in groups to create mock funding applications for an education experiment.

IES also supported the development of the new Research Design Workshop for Faculty from Minority-Serving Institutions, which will launch in summer 2014. The three-day workshop aims to introduce the basics of quantitative research design and analysis used in education research and development. It also seeks to equip participants with the conventional terminology and perspectives widely used in the quantitative education research community. Hedges and Konstantopoulos will lead the workshop with Chris Rhoads, a former IPR graduate research assistant now at the University of Connecticut, and Jessaca Spybrook of Western Michigan University.
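A core topic in both programs is statistical power under cluster randomization. The sketch below is a minimal illustration of the standard noncentral-t approximation, with made-up parameter values rather than material from the institute itself; it shows why clustering matters, since power hinges on the intraclass correlation and the number of clusters rather than the total number of students.

```python
from scipy import stats

def crt_power(delta, icc, n_per_cluster, clusters_per_arm, alpha=0.05):
    """Approximate power for a two-arm cluster-randomized trial.

    delta: standardized effect size; icc: intraclass correlation,
    the share of outcome variance lying between clusters.
    """
    # Variance of the difference in arm means (total outcome variance = 1)
    var_diff = 2.0 * (icc + (1.0 - icc) / n_per_cluster) / clusters_per_arm
    df = 2 * (clusters_per_arm - 1)               # cluster-level degrees of freedom
    crit = stats.t.ppf(1.0 - alpha / 2.0, df)     # two-sided critical value
    ncp = delta / var_diff ** 0.5                 # noncentrality parameter
    return 1.0 - stats.nct.cdf(crit, df, ncp)

# A 0.25 SD effect, ICC of 0.15, 25 students in each of 20 schools per arm:
print(round(crt_power(0.25, 0.15, 25, 20), 3))
# Doubling students per school helps far less than doubling schools:
print(round(crt_power(0.25, 0.15, 50, 20), 3))
print(round(crt_power(0.25, 0.15, 25, 40), 3))
```

Running the comparisons makes the design lesson concrete: with a nontrivial intraclass correlation, adding schools raises power substantially while adding students within schools quickly hits diminishing returns.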

Improving the Design and Quality of Experiments

Regression-Discontinuity Designs

A type of regression-discontinuity design (RDD) known as "sharp" has three key weaknesses compared with the randomized clinical trial (RCT): It has lower statistical power, it depends more heavily on statistical modeling assumptions, and its treatment effect estimates apply only to the narrow subpopulation of cases immediately around the cutoff, which is rarely of direct scientific or policy interest. In an article in the Journal of Policy Analysis and Management, IPR social psychologist Thomas D. Cook and former IPR postdoctoral fellow Coady Wing of the University of Illinois at Chicago examine how adding an untreated comparison, in the form of a pretest measure of the study outcome, to the basic RDD structure can mitigate these three problems. They conduct a within-study comparison that evaluates the performance of the pretest and post-test RDDs relative to each other and to a benchmark RCT. Their test bed is the Cash and Counseling Demonstration Experiment, a study that compared health, social, and economic outcomes for Medicaid beneficiaries in three states who received spending accounts to procure home- and community-based health services. They show that the pretest-supplemented RDD improves on the standard RDD in multiple ways that bring causal estimates and their standard errors closer to those of an RCT, not just at the cutoff but also away from it. Cook holds the Joan and Sarepta Harrison Chair in Ethics and Justice.
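The logic can be sketched in a few lines of simulation. The toy example below uses simulated data (not the Cash and Counseling records) and a deliberately simplified linear specification; it estimates a sharp RDD with and without a pretest supplement, where the pretest supplies an untreated comparison that absorbs person-level variation and tightens the estimate.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated sharp RDD: treatment is determined entirely by a score cutoff
n, cutoff, effect = 2000, 0.0, 0.4
score = rng.uniform(-1, 1, n)
treated = (score >= cutoff).astype(float)
ability = rng.normal(0, 1, n)                 # stable person-level component
pretest = 0.8 * score + ability + rng.normal(0, 0.5, n)   # untreated pre-period outcome
posttest = 0.8 * score + ability + effect * treated + rng.normal(0, 0.5, n)

# Standard RDD: post-test on treatment, the running variable, and their interaction
X1 = sm.add_constant(np.column_stack([treated, score, treated * score]))
m1 = sm.OLS(posttest, X1).fit()

# Pretest-supplemented RDD: the pretest acts as an untreated comparison function
X2 = sm.add_constant(np.column_stack([treated, score, treated * score, pretest]))
m2 = sm.OLS(posttest, X2).fit()

# Both recover the true effect of 0.4; the supplemented design does so with
# a visibly smaller standard error
print(m1.params[1], m1.bse[1])
print(m2.params[1], m2.bse[1])
```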

Propensity-Score Analysis

In an article published in the Journal of Methods and Measurement in the Social Sciences, Cook, William Shadish of the University of California, Merced, and Peter Steiner of the University of Wisconsin–Madison, a former IPR postdoctoral fellow, critique previous research on propensity-score analysis. Cook and his colleagues agree that prior research was right to caution that propensity-score analysis might yield quite different results from those of a randomized experiment, but they question the "ideal" test of whether propensity-score matching in quasi-experimental data can approximate the results of a randomized experiment. Examining the previous researchers' test criterion by criterion, they show that it reveals little about whether propensity-score analysis can work in principle. They urge methodologists in this field to develop better ways to construct an empirically based theory of quasi-experimental practice, one that details the conditions under which nonrandomized experiments might provide good answers about cause-and-effect relationships.
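For readers unfamiliar with the technique under debate, the sketch below shows a minimal propensity-score matching workflow under the most favorable condition imaginable, where selection into treatment depends only on observed covariates. The simulated data and the nearest-neighbor matching choice are illustrative; the methodological debate above concerns when, if ever, real quasi-experiments satisfy such conditions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Simulated quasi-experiment: selection into treatment depends only on
# covariates that the researcher actually observes
n = 3000
X = rng.normal(size=(n, 3))
p_select = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
treated = rng.random(n) < p_select
y = X @ np.array([1.0, 0.5, 0.2]) + 0.3 * treated + rng.normal(0, 1, n)

# Step 1: estimate each unit's propensity score from the observed covariates
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated case to the nearest untreated case on the score
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))

# Average treatment effect on the treated; should land near the true 0.3
att = y[treated].mean() - y[~treated][idx.ravel()].mean()
print(round(att, 3))
```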

New Parameters for State Test Scores

IES is also sponsoring a project co-led by IPR statistician Larry Hedges, with IPR project coordinator Zena Ellison, that seeks to establish new design parameters for education experiments at state, local, school, and classroom levels. Many current education experiments use designs that involve the random assignment of entire pre-existing groups, such as classrooms and schools, to treatments, but these groups are not themselves composed at random. As a result, individuals in the same group tend to be more alike than individuals in different groups, a phenomenon known as statistical clustering. The sensitivity of experiments depends on the amount of clustering in the design, which is difficult to know beforehand. This project seeks to provide empirical evidence on measures of clustering, such as intraclass correlations and related design parameters, and to make these measures available to researchers who design education studies. They will be publicly available on IPR's website. The project has already produced important results, including new methods of calculating standard errors for intraclass correlations and software that implements them. Preliminary work indicates that two-level intraclass correlations (students nested within schools) vary across the participating states. However, in a study Hedges co-authored with Eric Hedberg of the National Opinion Research Center and IPR graduate research assistant Arend Kuyper, the researchers' findings from three-level models (students nested within schools nested within districts) suggest that this variation might be related to district structures, and that within-district intraclass correlations are more consistent across states. Hedges is Board of Trustees Professor in Education and Social Policy.
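In standard multilevel notation (a textbook rendering, not necessarily the project's own), the two-level intraclass correlation at the center of this work is the share of total outcome variance lying between schools:

```latex
% Two-level model: outcome of student i in school j
Y_{ij} = \mu + u_j + e_{ij}, \qquad
u_j \sim N(0, \tau^2), \quad e_{ij} \sim N(0, \sigma^2)

% Intraclass correlation: between-school share of total variance
\rho = \frac{\tau^2}{\tau^2 + \sigma^2}
```

A three-level model adds a district variance component on top of the school and student components, which is what allows the within-district and between-district shares to be separated and cross-state variation to be traced to district structures.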

Multiple-Frame Sampling for Population Subgroups

For studies that aim to estimate the prevalence of a rare subgroup within a sampled population, IPR statistician Bruce Spencer and colleagues examine whether multiple-frame samples can deliver more precise prevalence estimates than single-frame household samples for the same cost or less. In an article published in the Proceedings of the Survey Research Methods Section, they examine relative cost-efficiency for simple unclustered samples and then consider the effect of cluster sampling. Findings are illustrated for the case where the subgroup consists of victims of rape and sexual assault (RSA) in a civilian non-institutionalized population of persons 12 years and older. Two sample designs are compared: dual-frame sampling from a conventional household frame plus a frame constructed from police reports of RSA, and single-frame sampling from the household frame alone. They conclude that a dual-frame design will be more cost-effective to the extent that RSA prevalence among police reports exceeds RSA prevalence in the population as a whole. When cluster sampling is considered, however, the dual-frame design's gains diminish in direct relationship to the size of the intraclass correlation.
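The intuition can be put in back-of-the-envelope terms. Every number below is hypothetical, as is the simplifying assumption that interviews cost the same from either frame; the sketch only illustrates the direction of the comparison the authors analyze, not their actual estimates.

```python
# Back-of-the-envelope comparison of single- vs. dual-frame designs for
# locating members of a rare subgroup, under equal per-interview costs.
p_pop = 0.01      # subgroup prevalence in the general household frame
p_list = 0.60     # prevalence among cases sampled from the police-report frame
budget = 10_000   # total interviews the budget allows

# Single frame: every interview drawn from the household frame
hits_single = budget * p_pop                                  # ~100 members found

# Dual frame: divert a tenth of the interviews to the high-prevalence list
share_list = 0.10
hits_dual = (budget * (1 - share_list) * p_pop
             + budget * share_list * p_list)                  # ~690 members found

# Cluster sampling inflates variance by the design effect 1 + (m - 1) * icc,
# so the effective yield, and with it the dual-frame gain, erodes as the
# intraclass correlation grows
m, icc = 10, 0.05
print(hits_single, hits_dual, hits_dual / (1 + (m - 1) * icc))
```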

Data Use, Quality, and Cost in Policy Research

Big Data Network

On October 10–11, more than 50 academics, policymakers, and practitioners gathered at Northwestern University for an inaugural meeting and workshop, organized by IPR, that aims to establish a national network of faculty, policymakers, and practitioners to examine the construction of "next-generation" data sets. The National Science Foundation (NSF)-supported group is led by IPR Director and education economist David Figlio and Kenneth Dodge of Duke University. The federal government has spent more than half a billion dollars so far on building longitudinal, state-level data sets around the nation. Yet while such data systems have become a national priority, states' data collection efforts are still in their infancy, with little in the way of best practices or minimum guidelines to optimize data collection, use, and a host of related issues. At the meeting, members of the network shared new research made possible by big data and discussed how they can work to improve large-scale administrative data sets in the United States. The network hopes to create a prototype using data from North Carolina and Florida, states that already have such data sets. Creating a comprehensive data set requires close collaboration among scholars, policymakers, and data administrators at many levels of government; accordingly, the network's members include three former governors, two state education superintendents, and the first IES director. The other critical element in making effective use of such data sets is cross-disciplinary knowledge and expertise. IPR faculty economists Jonathan Guryan and Diane Whitmore Schanzenbach, social demographer Quincy Thomas Stewart, psychobiologist Emma Adam, and biological anthropologists Christopher Kuzawa and Thomas McDade are all members.

Decision Theory for Statistical Agencies

Government data collections are tempting targets for budget cutters, not because the budgets are large, but because ignorance about data use makes the effects of data reductions hard to see. There is a reason so little is known about data use: Inferring the impact of data is a problem of assessing the causal effect of an intervention. One can observe what happened when the data program was conducted, or what happened when it was not, but never both. With funding from the NSF, Spencer and IPR economist Charles F. Manski are conducting a cost-benefit analysis of the 2020 Census. Almost half a trillion dollars per year in federal funds are allocated by formulas that involve census data, so as lawmakers look to keep census costs down, it is imperative to ask whether the cost controls will still yield acceptable levels of accuracy. Because data use is so complicated and difficult to study, Spencer argues that new theory is needed to develop, analyze, and interpret case studies of data use in policymaking and research. The findings carry practical implications for statistical agencies, which must understand and communicate, in both the short and long term, the value of the data programs they carry out. The researchers propose to extend and apply statistical decision theory to attack such basic questions, focusing on data use, data quality, data cost, and optimization. Manski is Board of Trustees Professor in Economics.
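As a purely illustrative rendering of the decision-theoretic framing, one can cast the accuracy choice as minimizing the sum of collection costs and the losses that allocation errors impose. The cost curve, loss function, and every constant below are hypothetical placeholders, not Spencer and Manski's model.

```python
import numpy as np

# Stylized tradeoff: collecting more accurate data costs more, while less
# accurate data misdirects formula-allocated funds. All terms are hypothetical.
accuracy = np.linspace(0.90, 0.999, 500)          # candidate accuracy levels

collection_cost = 2.0 / (1.0 - accuracy)          # cost rises as error shrinks
misallocation_loss = 5000.0 * (1.0 - accuracy)    # loss from misdirected funds

total_loss = collection_cost + misallocation_loss
print(round(accuracy[np.argmin(total_loss)], 3))  # accuracy minimizing total loss
```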

Framing Methods and Pretreatment Effects 

Treatment Response for Social Interactions

Manski studies identification of treatment response in settings with social interactions, where personal outcomes might vary with the treatment of others. Defining a person's treatment response to be a function of the entire vector of treatments received by the population, he examines identification when nonparametric shape restrictions and distributional assumptions are placed on response functions. An early key result of this work is that the traditional assumption of individualistic treatment response is a polar case within the broad class of constant treatment response (CTR) assumptions, the other pole being unrestricted interactions. Important non-polar cases are interactions within reference groups and anonymous interactions. His analysis consists of three parts: first examining identification under Assumption CTR alone, then strengthening this assumption to semi-monotone response, and finally discussing how these assumptions can be derived from models of endogenous interactions. Manski sees these contributions, published in the Econometrics Journal, as providing a basis for further research.
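In schematic notation (a paraphrase, not the article's exact formalism), the setup and the two polar cases look like this:

```latex
% Each person j's outcome depends, in principle, on the entire vector of
% treatments assigned across the population of size J
t = (t_1, \ldots, t_J) \in T^J, \qquad y_j : T^J \to Y

% Individualistic treatment response: only one's own treatment matters
y_j(t) = y_j(t') \quad \text{whenever } t_j = t'_j

% Constant treatment response (CTR): outcomes are equal whenever t and t'
% fall in the same cell of a specified partition of T^J; individualistic
% response (cells indexed by t_j alone) and unrestricted interactions
% (every vector its own cell) are the two poles
y_j(t) = y_j(t') \quad \text{whenever } t \sim_j t'
```

The non-polar cases then correspond to intermediate partitions: under reference-group interactions, outcomes depend on one's own treatment and the treatments of a designated subgroup, while under anonymous interactions they depend on one's own treatment and the distribution of treatments in the population rather than on who holds them.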

Interdisciplinary Methodological Innovation 

Time-Sharing Experiments (TESS)

IPR sociologist Jeremy Freese has co-led TESS, or Time-Sharing Experiments for the Social Sciences, since 2008. Last year, TESS received renewed NSF funding, and Freese was joined by IPR political scientist and associate director James Druckman as co-principal investigator. Launched in 2001, TESS offers researchers opportunities to test their experimental ideas on large, diverse, randomly selected subject populations. Faculty, graduate students, and postdoctoral researchers submit proposals for external peer review, and if a proposal is accepted, TESS fields the Internet-based survey or experiment on a random sample of the U.S. population at no cost to the researcher. By covering fielding costs, TESS enables scholars to implement major research projects that might otherwise be out of reach, which is particularly relevant for younger scholars. For this reason, Freese and Druckman launched the first annual Special Competition for Young Investigators in 2013, open only to graduate students and individuals who completed their PhDs within the previous three years.

IPR political scientist Laurel Harbridge's work on Americans' preferences for bipartisanship and IPR social psychologist Jennifer Richeson's research on the "majority-minority" nation were conducted in conjunction with TESS. Former IPR graduate research assistant Thomas Leeper, now at Aarhus University in Denmark, also received TESS funding for an examination of population-based studies.

TESS also offers the possibility of simultaneous recruitment through Amazon Mechanical Turk (MTurk), which is cheap and diverse but neither representative nor probability-based. MTurk is an online crowdsourcing platform launched by Amazon in 2005 that connects "workers" with "requesters." According to Amazon's website, requesters ask workers to complete "HITs," or human intelligence tasks: self-contained tasks that a worker completes and submits in exchange for a reward. This pairing allows researchers to compare results from a representative population on a typical TESS study with results from the unrepresentative MTurk platform. Druckman, Freese, and several graduate students are completing a number of studies comparing MTurk with TESS and other probability-sample surveys. Freese is Ethel and John Lindgren Professor of Sociology, and Druckman is Payson S. Wild Professor of Political Science.

Research Prizes for Minorities

The need for the United States to compete globally in science continues to rise, but minority groups, despite being the fastest-growing segments of the population, are grossly underrepresented in these fields. One attempt at increasing the number of minority students entering careers in biomedicine is the use of prizes for undergraduate minority student research awarded by the Annual Biomedical Research Conference for Minority Students. Although research prizes are common in science, it is unclear whether they have effects on the careers of scientists, and if so, how they produce these effects. With funding from the National Institute of General Medical Sciences, Hedges and Evelyn Asch, an IPR research associate, are conducting a study of this research prize competition that will explore the mechanisms by which research prizes might affect undergraduate minority students’ career success as scholars. The project results will help provide answers about how to increase the number of minority students who become biomedical researchers and why such awards could be a potent tool in transforming students into scientists.

Advancing Education Research

The Society for Research on Educational Effectiveness (SREE) organized a conference in March on "Learning from Mixed Results" and one in September on "Interdisciplinary Synthesis." Members of the IPR community who presented research findings included fellows Cook, Figlio, Hedges, and Jonathan Guryan, as well as postdoctoral fellow Martyna Citkowicz, IPR graduate research assistant James Pustejovsky, and former IPR graduate research assistants Kelly Hallberg of the American Institutes for Research, Elizabeth Tipton of Columbia University, and Vivian Wong of the University of Virginia. Plenary speakers at the March conference were Christopher Jencks and Catherine Snow of Harvard University, Marshall Smith of the Carnegie Foundation for the Advancement of Teaching, and Aimee Rogstad Guidera, founder of the Data Quality Campaign. James Pellegrino of the University of Illinois at Chicago and New York University's Cybele Raver spoke in September.

Hedges is a co-founder of SREE and has served as its president since 2009, when the society established its base at IPR. During this time, the society's membership, which draws researchers from the social sciences, behavioral sciences, and statistics who endeavor to advance research on causal relations in education, has tripled. SREE's dissemination program includes the Journal of Research on Educational Effectiveness, a peer-reviewed publication focused on education methods, evaluation, interventions, and policy, now in its seventh volume. Professional development activities include workshops and short courses, running from three hours to three days during conferences, and a summer program at Stanford University.