PhD, Statistics, Certificate in Education Science, Northwestern University, 2011
Statistician Elizabeth Tipton’s research focuses on the design and analysis of field experiments, particularly on their external validity and on how to make causal generalizations from them. She is developing methods and tools to improve the generalizability of large randomized trials, especially in education and psychology. Her research in meta-analysis—the combination of results across many smaller trials—examines modeling and adjusting for dependence between effect sizes.
Tipton’s research has been published in the Journal of Educational and Behavioral Statistics, Statistics in Medicine, Psychological Methods, Evaluation Review, and the Journal of Research on Educational Effectiveness, among others. Her work has been supported by the National Science Foundation, the Institute of Education Sciences, the Spencer Foundation, and the Raikes Foundation.
Generalization + Power. In a research study funded by the Institute of Education Sciences (IES), Tipton and Jessaca Spybrook of Western Michigan University are developing a new user-friendly webtool for designing cluster randomized trials in K-12 education that attends to both statistical power and generalizability. After examining the design practices of 54 IES-funded studies conducted since 2005, the researchers aim to bridge the gaps among what are usually treated as three separate design considerations: generalizability, power to detect the average treatment effect, and power to detect moderator effects. After extensive testing, the software will be made freely and publicly available.
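The power side of this design problem is often illustrated with the standard design-effect approximation for a balanced two-arm cluster randomized trial. The sketch below is a minimal illustration of that textbook calculation, not the method implemented in the webtool described above; the function name, default critical value, and example parameter values are all illustrative assumptions.

```python
import math

def crt_power(n_clusters, cluster_size, icc, effect_size, z_crit=1.96):
    """Approximate power of a balanced two-arm cluster randomized trial
    to detect a standardized average treatment effect.

    Normal-approximation sketch under equal allocation; illustrative only.
    """
    # Clustering inflates the variance of the estimated effect by the
    # design effect 1 + (m - 1) * ICC, where m is the cluster size.
    deff = 1 + (cluster_size - 1) * icc
    var = 4.0 * deff / (n_clusters * cluster_size)  # equal allocation
    se = math.sqrt(var)
    ncp = effect_size / se  # noncentrality under the alternative

    def norm_cdf(x):
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    # Two-sided test: probability of rejection under the alternative.
    return (1 - norm_cdf(z_crit - ncp)) + norm_cdf(-z_crit - ncp)
```

For example, `crt_power(40, 25, 0.15, 0.25)` shows how a nontrivial intraclass correlation can leave a 40-school trial underpowered for a modest effect, while doubling the number of clusters raises power substantially.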
National Study of Learning Mindsets. Tipton is collaborating on the National Study of Learning Mindsets, the largest-ever randomized controlled trial (RCT) of growth mindset interventions in the United States. Tipton contributed to the design of this RCT, which was conducted in a probability sample of high schools throughout the United States. This longitudinal study investigates which kinds of students, in which kinds of classrooms and schools, are most likely to benefit from online exercises designed to promote learning mindsets. Additionally, the study provides an opportunity for methodological research on generalizability and the analysis of treatment effect heterogeneity. The study is sponsored by the Mindset Scholars Network, of which Tipton is a founding member.
Dissemination of Methods to Researchers. In work funded by the Spencer Foundation, Tipton and IPR education researcher and statistician Larry Hedges have developed The Generalizer, a free webtool that guides education researchers through the design and reporting process for K-12 studies. With former IPR graduate research assistant James Pustejovsky, Tipton is developing small-sample methods for improved hypothesis testing using cluster-robust variance estimation (CRVE). For the next three summers, Tipton is co-teaching a one-week IES-funded workshop on advanced meta-analysis methods (MATI).
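For readers unfamiliar with CRVE, the basic idea is a "sandwich" variance estimator that allows errors to be correlated within clusters. The sketch below shows only the unadjusted CR0 form for OLS; the small-sample work mentioned above (e.g., the CR2 adjustment) further rescales each cluster's residuals to reduce bias when the number of clusters is small, and is not implemented here. Function and variable names are illustrative.

```python
import numpy as np

def cluster_robust_se(X, y, cluster_ids):
    """OLS point estimates with basic (CR0) cluster-robust standard errors.

    Illustrative sketch of the sandwich estimator; the CR2 small-sample
    adjustment studied by Pustejovsky and Tipton is not applied here.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    cluster_ids = np.asarray(cluster_ids)

    bread = np.linalg.inv(X.T @ X)          # (X'X)^{-1}
    beta = bread @ X.T @ y                  # OLS coefficients
    resid = y - X @ beta

    # "Meat": sum over clusters of X_g' e_g e_g' X_g
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(cluster_ids):
        in_g = cluster_ids == g
        score = X[in_g].T @ resid[in_g]
        meat += np.outer(score, score)

    V = bread @ meat @ bread                # sandwich variance estimate
    return beta, np.sqrt(np.diag(V))
```

With few clusters, CR0 standard errors are known to be biased downward, which is precisely the problem the small-sample adjustments target.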
Tipton, E., and R. Olsen. (Forthcoming). A review of statistical methods for generalizing from evaluations of educational interventions. Educational Researcher.
Pustejovsky, J., and E. Tipton. (Forthcoming). Small sample methods for cluster-robust variance estimation and hypothesis testing in fixed effects models. Journal of Business and Economic Statistics.
Tipton, E., and L. Peck. 2017. A design-based approach to improve external validity in welfare policy evaluations. Evaluation Review (Special Issue: External Validity 1), 41(4): 326-56.
Tipton, E., and J. Shuster. 2017. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach. Statistics in Medicine, 36(23): 3621-35.
Tipton, E., and J. Pustejovsky. 2015. Small-sample adjustments to multivariate hypothesis tests in robust variance estimation in meta-regression. Journal of Educational and Behavioral Statistics, 40(6): 604-34.
Tipton, E. 2015. Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20(3): 375-93.
Tipton, E. 2014. How generalizable is your experiment? Comparing a sample and population through a generalizability index. Journal of Educational and Behavioral Statistics, 39(6): 478-501.
Tipton, E. 2013. Improving generalizations from experiments using propensity score subclassification: Assumptions, properties, and contexts. Journal of Educational and Behavioral Statistics, 38: 239-66.
Uttal, D., N. Meadow, E. Tipton, L. Hand, A. Alden, C. Warren, and N. Newcombe. 2013. The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139(2): 352-402.