2015 Summer Institute Required Reading

2015 IES/NCER Summer Research Training Institute

Instructional Sessions and Readings

Sessions 1–2 | Session 3 | Session 4 | Sessions 5–7 | Sessions 8–9 | Session 10 | Sessions 11–12 | Sessions 13–14 | Sessions 15–16 | Session 17 | Sessions 18–19 | Sessions 20–21 | Session 22

Session 1: Specifying conceptual and operational models; formulating questions

Session 2: Describing and quantifying outcomes

Instructor: Mark Lipsey

Baron, R., and Kenny, D.A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. (Also listed for Sessions 18–19 and 20–21.)

Clements, D.H. (2007). Curriculum research: Toward a framework for research-based curricula. Journal for Research in Mathematics Education, 38(1), 35–70.

Boruch, R.F., Victor, T., and Cecil, J.S. (2000). Resolving ethical and legal problems in randomized experiments. Crime and Delinquency, 46(3), 330–353.

Boruch, R.F. (2007). Encouraging the flight of error: Ethical standards, evidence standards, and randomized trials. New Directions for Evaluation, 113, 55–73.

Session 3: Assessing the cause

Instructor: David Cordray

Hulleman, C. S., and Cordray, D. S. (2009). Moving from the lab to the field: The role of fidelity and achieved relative intervention strength. Journal of Research on Educational Effectiveness, 2(1), 88–110.

Cordray, D.S., and Pion, G.M. (2006). Treatment strength and integrity: Models and methods. In R.R. Bootzin and P.E. McKnight (eds.), Strengthening Research Methodology: Psychological Measurement and Evaluation (pp. 103–124). Washington, DC: American Psychological Association.

Session 4: Introduction to the group projects

Instructors: Spyros Konstantopoulos and Chris Rhoads

National Center for Education Research, FY2015 Request for Applications.

Website: http://ies.ed.gov/funding/ncer_progs.asp

Sessions 5–7: Basic experimental design for education studies

Instructor: Spyros Konstantopoulos

Kirk, R.E. (1995). Chapter 7: Randomized block designs. Chapter 11: Hierarchical designs. In Experimental Design: Procedures for the Behavioral Sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.

Raudenbush, S.W. (1993). Hierarchical linear models and experimental design. In L.K. Edwards (ed.) Applied Analysis of Variance in Behavioral Science (pp. 459–496). New York: Marcel Dekker, Inc.

Hedges, L.V., and Hedberg, E.C. (2007). Intraclass correlations for planning group randomized experiments in education. Educational Evaluation and Policy Analysis, 29, 60–87.

Xu, Z., and Nichols, A. (2010). New estimates of design parameters for clustered randomization studies. Center for Analysis of Longitudinal Data in Education Research, Working Paper 43.

Rhoads, C.H. (2011). The implications of “contamination” for experimental design in education research. Journal of Educational and Behavioral Statistics, 36(1), 76–104.

Sessions 8–9: Analysis lab

Instructor: Beth Tipton

Peugh, J., and Enders, C. (2005). Using the SPSS Mixed Procedure to fit cross-sectional and longitudinal multilevel models. Educational and Psychological Measurement, 65, 717–741.

Singer, J.D. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 25, 323–355.

Suggested Reading:

Bloom, H.S. (2005). Randomizing groups to evaluate place-based programs. In Howard S. Bloom (ed.), Learning More from Social Experiments: Evolving Analytic Approaches (pp. 115–172). New York: Russell Sage Foundation. (Also listed for Sessions 5–7.)

Raudenbush, S.W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2(2), 173–185.

Shek, D., and Ma, C. (2011) Longitudinal data analysis using linear mixed models in SPSS: Concepts, procedures and illustrations. The Scientific World Journal, 42–76.

Session 10: External validity

Instructors: Beth Tipton and Larry Hedges

Bloom, H.S., and Michalopoulos, C. (2010). When is the story in the subgroups? Strategies for interpreting and reporting intervention effects on subgroups. MDRC Working Paper on Research Methodology, April.

Bloom, H.S., Hill, C.J., Black, A.R., and Lipsey, M.W. (2008). Performance trajectories and performance gaps as achievement effect-size benchmarks for educational interventions. Journal of Research on Educational Effectiveness, 1(4), 289–328.

Hedges, L.V. (2013). Recommendations for practice: Justifying claims of generalizability. Educational Psychology Review, 25(3), 331–337.

Schochet, P.Z., Puma, M., and Deke, J. (2014). Understanding variation in treatment effects in education impact evaluations: An overview of quantitative methods. NCEE 2014-4017. National Center for Education Evaluation and Regional Assistance.

Tipton, E. (2013). Stratified sampling using cluster analysis: A sample selection strategy for improved generalizations from experiments. Evaluation Review, 37(2), 109–139.

Tipton, E. (2014). How generalizable is your experiment? An index for comparing experimental samples and populations. Journal of Educational and Behavioral Statistics, 39(6), 478–501.

Tipton, E., Hedges, L.V., Vaden-Kiernan, M., Borman, G., Sullivan, K., and Caverly, S. (2014). Sample selection in randomized experiments: A new method using propensity score stratified sampling. Journal of Research on Educational Effectiveness, 7(1), 114–135.

Sessions 11–12: Sample size and statistical power

Instructor: Larry Hedges

Hedges, L.V., and Hedberg, E.C. (2013). Intraclass correlations and covariate outcome correlations for planning 2- and 3-level cluster randomized experiments in education. Evaluation Review, 37, 13–57.

Hedberg, E.C., and Hedges, L.V. (2014). Reference values of within-district intraclass correlations of academic achievement by district characteristics: Results from a meta-analysis of district-specific data. Evaluation Review, 38, 546–582.

Spybrook, J., Hedges, L.V., and Borenstein, M. (2014). Understanding statistical power in cluster randomized trials: Challenges posed by differences in notation and terminology. Journal of Research on Educational Effectiveness, 7, 384–406.

Hedges, L.V., and Borenstein, M. (2014). Constrained optimal design in three and four level experiments. Journal of Educational and Behavioral Statistics, 39, 257–281.

Sessions 13–14: Statistical power analysis lab

Instructor: Jessaca Spybrook

Note: Download the Optimal Design Power Analysis Program from: http://wtgrantfoundation.org/FocusAreas#tools-for-group-randomized-trials

Raudenbush, S.W., Martinez, A., and Spybrook, J. (2007). Strategies for improving precision in group-randomized experiments. Educational Evaluation and Policy Analysis, 29(1), 5–29.

Spybrook, J., Raudenbush, S. W., Congdon, R., and Martinez, A. (2011). Optimal design for longitudinal and multilevel research: Documentation for the “Optimal Design” software.

Konstantopoulos, S. (2009). Incorporating cost in power analysis for three-level cluster-randomized designs. Evaluation Review, 33(4), 335–357.

Konstantopoulos, S. (2009). Using power tables to compute statistical power in multilevel experimental designs. Practical Assessment, Research & Evaluation, 14(10), 1–9.

Additional reading:

Spybrook, J., Hedges, L., and Borenstein, M. (2014). Understanding statistical power in cluster randomized trials: Challenges posed by differences in notation and terminology. Journal of Research on Educational Effectiveness, 7, 384–406. (Also listed for Sessions 11–12.)

Spybrook, J. (2014). Detecting intervention effects across context: An examination of the power of cluster-randomized trials. Journal of Experimental Education, 82(3).

Kelcey, B., and Phelps, G. (2013). Considerations for designing group randomized trials of professional development with teacher knowledge outcomes. Educational Evaluation and Policy Analysis, 35(3), 370–390.

Three articles in special issue of Evaluation Review, 37(6). Available online.

Sessions 15–16: School recruitment

Instructor: Carol Connor

Coburn, C.E., Penuel, W.R., and Geil, K.E. (2013). Research-practice partnerships: A strategy for leveraging research for educational improvement in school districts. New York, NY: William T. Grant Foundation.

Session 17: Growth modeling

Instructor: Chris Rhoads

Bryk, A.S., and Raudenbush, S.W. (1988). Toward a more appropriate conception of school effects. American Journal of Education, 97, 65–108.

Burchinal, M., and Appelbaum, M.I. (1991). Estimating individual developmental functions: Methods and their assumptions. Child Development, 62, 23–43.

Suggested Reading:

Raudenbush, S.W. (2001). Toward a coherent framework for comparing trajectories of individual change. In L. Collins and A. Sayer (eds.), Best Methods for Studying Change (pp. 33–64). Washington, DC: American Psychological Association.

Sessions 18–19: Moderator analysis

Instructor: Spyros Konstantopoulos

Baron, R., and Kenny, D.A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. (Also listed for Sessions 1–2 and 20–21.)

Rubin, D.B. (1977). Assignment to treatment group on the basis of a covariate. Journal of Educational Statistics, 2(1), 1–26.

Sessions 20–21: Mediation models

Instructor: Laura Stapleton

Baron, R., and Kenny, D.A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. 

Krull, J.L., and MacKinnon, D.P. (2001). Multilevel modeling of individual and group level mediated effects. Multivariate Behavioral Research, 36(2), 249–277.

MacKinnon, D.P., and Fairchild, A.J. (2009). Current directions in mediation analysis. Current Directions in Psychological Science, 18(1), 16–20.

Suggested Reading:

Bauer, D.J., Preacher, K.J., and Gil, K.M. (2006). Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: New procedures and recommendations. Psychological Methods, 11(2), 142–163.

MacKinnon, D.P., Fritz, M.S., Williams, J., and Lockwood, C.M. (2007). Distribution of the product confidence limits for the indirect effect: Program PRODCLIN. Behavior Research Methods, 39(3), 384–389.

Pituch, K.A., Tate, R.L., and Murphy, D.L. (2010). Three-level models for indirect effects in school- and class-randomized experiments in education. Journal of Experimental Education, 78(1), 60–95.

Session 22: Reporting trials

Instructor: Larry Hedges

Campbell, M.K. et al. (2012). CONSORT 2010 statement: Extension to cluster randomised trials. British Medical Journal, 345, e5661. (doi: 10.1136/bmj.e5661)

CONSORT Extension for Cluster Trials Checklist