Training Institute on Randomized Controlled Trials in Education Research. Funded by the U.S. Department of Education's Institute of Education Sciences, this project, now in its fifteenth year, provides an intensive two-week summer institute for research professionals who desire advanced training in the design, conduct, and analysis of large-scale randomized experiments in education.
Methods for Studying Replication in Science. Replication is a central part of the logic and rhetoric of science, but the replicability of research is currently under question in fields as diverse as medicine and psychology. Given the importance of replication, surprisingly little has been written about statistical methods for assessing it. With the support of the National Science Foundation and the U.S. Institute of Education Sciences (IES), this project seeks to develop more precise definitions of replication and statistical methods for evaluating whether a set of studies has demonstrated replication. It also focuses on developing principles for designing ensembles of studies to assess the replicability of research findings.
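As a concrete, hypothetical illustration of the kind of statistical question involved, the sketch below applies one standard building block of replication analysis, Cochran's Q test for heterogeneity, to a small set of effect estimates. The effect sizes, standard errors, and variable names are illustrative assumptions, not output of this project.

    # Hypothetical sketch: do k study estimates appear consistent with a single
    # common effect? Cochran's Q test for heterogeneity is one building block
    # used in statistical assessments of replication.
    import numpy as np
    from scipy.stats import chi2

    effects = np.array([0.42, 0.31, 0.55, 0.28])   # hypothetical effect estimates
    se = np.array([0.10, 0.12, 0.15, 0.11])        # their standard errors

    w = 1.0 / se**2                                # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    Q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q statistic
    df = len(effects) - 1
    p_value = chi2.sf(Q, df)                       # small p suggests the estimates do not share one common effect
    print(f"Q = {Q:.2f}, df = {df}, p = {p_value:.3f}")

A failure to reject homogeneity is, of course, only one narrow operationalization of "replication"; the project's work on more precise definitions addresses exactly that ambiguity.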
What Have We Learned in 20 Years of IES Randomized Trials? The U.S. Institute of Education Sciences (IES) was founded in 2002 to “expand fundamental knowledge and understanding of education.” One part of the IES program has been the encouragement and funding of several hundred large-scale randomized trials of education products, interventions, and services. This project seeks to understand what has been learned from 20 years of IES research.
Improving the Generalizability of Evaluation Research. This project, funded by grants from the National Science Foundation and the Spencer Foundation, supports a program of work on methods to formalize subjective notions of generalizability and external validity. This includes theoretical work on quantifying generalizability concepts in terms of the bias and variance of estimates of the population average treatment effect. It also includes developing methods for better generalization from existing experiments, case studies of retrospective generalizability, and methods for better planning of education experiments for generalization to policy-relevant populations.
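To make the bias-and-variance framing concrete, the hypothetical sketch below reweights stratum-specific effects from an experimental sample to a target population (a simple post-stratification estimator). The strata, effect values, and population shares are assumptions chosen for illustration, not results from the project.

    # Hypothetical sketch: if the experimental sample over-represents strata with
    # large effects, the unweighted sample ATE is a biased estimate of the
    # population average treatment effect (PATE); reweighting to population
    # shares is one simple way to generalize.
    import numpy as np

    sample_effects = np.array([0.30, 0.10, 0.22])  # stratum-specific effect estimates (hypothetical)
    sample_shares = np.array([0.50, 0.30, 0.20])   # stratum shares in the experimental sample
    pop_shares = np.array([0.25, 0.45, 0.30])      # stratum shares in the policy-relevant population

    sate = np.sum(sample_shares * sample_effects)  # sample average treatment effect
    pate = np.sum(pop_shares * sample_effects)     # post-stratified estimate of the PATE
    print(f"SATE = {sate:.3f}, estimated PATE = {pate:.3f}, bias of SATE = {sate - pate:+.3f}")

Reweighting of this kind reduces bias at the cost of added variance when some strata are thinly sampled, which is the trade-off that the theoretical work on bias and variance formalizes.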
Improving Evaluations of STEM Research and Development. Research and development of education interventions in STEM necessarily involves evaluation, yet it is often challenging to conduct rigorous evaluations with the resources available. This project will develop methods and provide professional development for researchers in using rigorous but feasible research designs for STEM education research and development.
Effect Sizes in Single Case Designs. Single case designs are used widely in special education and medicine and are often the predominant design for studying low-incidence diseases and disabilities. Evidence from single case designs has been difficult to synthesize and incorporate into meta-analyses, systematic reviews, and evidence databases because it has lacked effect size measures comparable to those from more conventional (between-groups) studies. This project develops effect size measures that are comparable to those for other designs and can be used in syntheses and evidence databases (so-called design-comparable effect size measures).
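The deliberately simplified sketch below conveys the core idea behind a design-comparable effect size: scale the baseline-to-treatment change by a standard deviation that reflects both within-case and between-case variation, so the result lands on the same scale as a between-groups standardized mean difference. The data are invented, and this toy estimator omits the small-sample and autocorrelation corrections used in the published methods.

    # Hypothetical (AB) single-case data for three cases; values are invented.
    import numpy as np

    cases = {
        "case1": {"A": [3, 4, 3, 5], "B": [7, 8, 7, 9]},
        "case2": {"A": [2, 3, 2, 3], "B": [5, 6, 6, 5]},
        "case3": {"A": [4, 5, 4, 4], "B": [8, 7, 9, 8]},
    }

    # average change from baseline (A) to treatment (B) across cases
    change = np.mean([np.mean(c["B"]) - np.mean(c["A"]) for c in cases.values()])
    # within-case variation (pooled baseline variance) and between-case variation (of baseline means)
    within_var = np.mean([np.var(c["A"], ddof=1) for c in cases.values()])
    between_var = np.var([np.mean(c["A"]) for c in cases.values()], ddof=1)
    # standardize by the total SD so the result is comparable to a between-groups SMD
    d_comparable = change / np.sqrt(within_var + between_var)
    print(f"simplified design-comparable effect size: d = {d_comparable:.2f}")

Putting single-case evidence on this common scale is what allows it to enter meta-analyses and evidence databases alongside between-groups trials.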
Journal Articles
Zejnullahi, R., and L.V. Hedges. 2024. Robust variance estimation in small meta-analysis with the standardized mean difference. Research Synthesis Methods 15(1): 44–60.
Hedges, L.V., E. Tipton, R. Zejnullahi, and K. Diaz. 2023. Effect sizes in ANCOVA and difference-in-differences designs. British Journal of Mathematical and Statistical Psychology 76(2): 259–82.
Hedges, L.V., W. Shadish, and P. Natesan. 2023. Power analysis for single case designs based on standardized mean difference effect sizes: Computations for (AB)k designs with multiple cases. Behavior Research Methods 55: 3494–503.
Batley, P., M. Thamaran, and L.V. Hedges. 2023. ABkPowerCalculator: An app to compute power for balanced (AB)k single case experimental designs. Multivariate Behavioral Research 59(2): 406–10.
Chan, W., and L. Hedges. 2023. Pooling interactions into error terms in multisite experiments. Journal of Educational and Behavioral Statistics 47(6): 639–65.
Sabol, T., D. McCoy, K. Gonzalez, L. Miratrix, L. Hedges, J. Spybrook, and C. Weiland. 2022. Exploring treatment impact heterogeneity across sites: Challenges and opportunities for early childhood researchers. Early Childhood Research Quarterly 58: 14–26.
Brunner, M., L. Keller, S. Stallasch, J. Kretschmann, A. Hasl, F. Preckel, O. Lüdtke, and L. Hedges. 2022. Meta-analyzing individual participant data from studies with complex survey designs: A tutorial on using the two-stage approach for data from educational large-scale assessments. Research Synthesis Methods 1–31.
Natesan Batley, P., and L. Hedges. 2021. Accurate models vs. accurate estimates: A simulation study of Bayesian single-case experimental designs. Behavior Research Methods 53: 1782–98.
Schauer, J., K. Fitzgerald, S. Peko-Spicer, M. Whalen, R. Zejnullahi, and L. Hedges. 2021. An evaluation of statistical methods for aggregate patterns of replication failure. The Annals of Applied Statistics 15(1): 208–29.
Schauer, J., and L. Hedges. 2021. Reconsidering statistical methods for assessing replication. Psychological Methods 26(1): 127–39.
Hedges, L., and J. Schauer. 2021. The design of replication studies. Journal of the Royal Statistical Society. Series A: Statistics in Society 184(3): 868–86.
Schauer, J., and L. Hedges. 2020. Assessing heterogeneity and power in replications of psychological experiments. Psychological Bulletin 146(8): 701–19.
Natesan, P., T. Minka, and L. Hedges. 2020. Investigating immediacy in multiple phase-change single case experimental designs using a variational Bayesian unknown change-points model. Behavior Research Methods 52(4): 1714–28.
Schauer, J., A. Kuyper, E. Hedberg, and L. Hedges. 2020. The effects of microsuppression on state education data quality. Journal of Research on Educational Effectiveness 13(4): 794–815.
Hedges, L. 2019. The statistics of replication. Methodology 15(1): 3–14.
Books and Monographs
Hedges, L. V., M. Chiu, B. Chaney, and N. Kirkendall. 2022. A Vision and Roadmap for Education Statistics. Washington, D.C.: The National Academies Press.
Borenstein, M., L. V. Hedges, J. P. T. Higgins, and H. R. Rothstein. 2021. Introduction to Meta-Analysis. 2nd ed. London: Wiley.
Cooper, H. M., L. V. Hedges, and J. Valentine, eds. 2019. The Handbook of Research Synthesis and Meta-Analysis, 3rd ed. New York: The Russell Sage Foundation.
Hedges, L. V., and B. Schneider, eds. 2005. The Social Organization of Schooling. New York: The Russell Sage Foundation.
Cook, T., H. M. Cooper, D. Cordray, L. V. Hedges, R. J. Light, T. Louis, and F. Mosteller. 1994. Meta-Analysis for Explanation. New York: The Russell Sage Foundation.
Cooper, H. M. and L. V. Hedges, eds. 1993. The Handbook of Research Synthesis. New York: The Russell Sage Foundation.
Draper, D., D. P. Gaver, P. K. Goel, J. B. Greenhouse, L. V. Hedges, C. N. Morris, J. R. Tucker, and C. Waternaux. 1993. Combining Information: Statistical Issues and Opportunities for Research. Washington, D.C.: American Statistical Association.
Hedges, L. V., and I. Olkin. 1985. Statistical Methods for Meta-Analysis. New York: Academic Press.