QE Sample Workshop Schedule
This workshop will last five days. The schedule below is a sample taken from a previous year’s workshop.
Day 1
9:00 a.m. - 12:00 p.m.: Introductions
- Purposes, people, and schedule (1 hour)
- Two concepts of causation—activity and explanatory theories
- Rubin Counterfactual Model and its links to random assignment
- The Pattern Matching Model and its links to multiple implications
- Four types of validity very briefly described
- Validity priorities for this workshop
1:00 p.m. - 4:00 p.m.: Randomized Experiment with Individuals and Clusters
- Getting consent
- Proper and improper randomization
- Systematic attrition
- Treatment contamination and using instrumental variables to model it
- The Theoretically Unnecessary but Pragmatically Necessary Pretest
- Special issues with cluster-level experiments:
- Statistical power and computing sample sizes
- Small sample sizes and unhappy random assignment
- Dealing with treatment misassignment
- Analysis using multi-level models
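The power and sample-size issue above can be sketched numerically. This is a minimal normal-approximation calculation assuming equal cluster sizes and a two-arm comparison of means; the function name and defaults are illustrative, not part of the workshop materials:

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(effect_size, icc, cluster_size, alpha=0.05, power=0.80):
    """Approximate clusters needed per arm for a cluster-randomized trial.

    effect_size: standardized mean difference (Cohen's d).
    icc: intraclass correlation of the outcome within clusters.
    cluster_size: average number of individuals per cluster.
    Uses the two-sample normal-approximation formula, inflated by the
    design effect 1 + (m - 1) * icc.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    # individuals per arm under simple (individual-level) randomization
    n_ind = 2 * (z_a + z_b) ** 2 / effect_size ** 2
    deff = 1 + (cluster_size - 1) * icc   # design effect from clustering
    return ceil(n_ind * deff / cluster_size)
```

Even a modest ICC inflates the required number of clusters substantially, which is why small numbers of schools or classrooms so often leave cluster experiments underpowered.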
Day 2
9:00 a.m. - 12:00 p.m.: Regression Discontinuity: The Basics
- What is the design?
- Why is it unbiased?
- Many examples from education
- Modeling functional form in psychology and in economics
- Fuzzy discontinuities and instrumental variables
- Weighting at the cut-off point
- Statistical power considerations
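The basic estimation idea can be sketched in a few lines. This is a minimal local-linear version of a sharp design (the simplest functional-form choice discussed above); the function name and bandwidth handling are illustrative, not the workshop's own code:

```python
import numpy as np

def sharp_rd_estimate(x, y, cutoff, bandwidth):
    """Local-linear estimate of a sharp regression-discontinuity effect.

    Fits separate linear trends on each side of the cutoff using only
    observations within `bandwidth` of it; the effect is the jump in the
    fitted regression at the cutoff (the coefficient on the treatment
    indicator).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = np.abs(x - cutoff) <= bandwidth
    xc = x[keep] - cutoff                 # center the assignment variable
    t = (xc >= 0).astype(float)           # sharp rule: treated at/above cutoff
    X = np.column_stack([np.ones_like(xc), t, xc, t * xc])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[1]                        # jump at the cutoff
```

Getting the functional form right (here, linearity within the bandwidth) is exactly the modeling issue the psychology and economics traditions resolve differently.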
1:00 p.m. - 4:00 p.m.: Regression Discontinuity: Beyond the Basics
- Adding further design elements, with examples
- Hypothetical example of evaluating No Child Left Behind sanctions
- How to get the design used more often
Day 3
9:00 a.m. - 12:00 p.m.: Abbreviated Interrupted Time Series (ITS) Designs and Analysis
- Many reasons for adding more pretest time points
- Design of simple ITS, with education examples
- Identifying usual threats to internal and construct validity
- Dealing with these threats within the simple ITS framework
- Adding design elements to improve interpretation, with examples
- How to analyze the data given non-independent observations
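The standard analysis for a simple ITS is a segmented regression with a level-change and a slope-change term. The sketch below uses plain OLS and is illustrative only; as the last point notes, the non-independence of observations means the standard errors need a correction (e.g. a Newey-West or ARMA error model) in real analyses:

```python
import numpy as np

def its_segmented_fit(y, interruption):
    """OLS segmented-regression fit for a simple interrupted time series.

    y: outcome at equally spaced time points; `interruption` is the index
    of the first post-intervention point. Returns (level_change,
    slope_change) at the interruption.
    """
    y = np.asarray(y, float)
    t = np.arange(len(y), dtype=float)
    post = (t >= interruption).astype(float)   # post-intervention indicator
    t_post = post * (t - interruption)         # time elapsed since interruption
    X = np.column_stack([np.ones_like(t), t, post, t_post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2], beta[3]                    # level change, slope change
```

Separating the level change from the slope change is what the extra pretest time points buy you relative to a single pretest.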
Day 4
9:00 a.m. - 12:00 p.m.: "Workhorse Design"—Pre/Post with Non-Equivalent Groups
- Illustrating the design and the usual analytic problems with it
- Bad matching and statistical regression
- Better population matching through initial sampling design: Bloom et al. and Aiken et al.
- Knowing and measuring the selection process; and knowing and modeling the outcome: Shadish et al.
- Other local matching techniques for education: Twin, sibling, grade, and cohort control matches
- The general principle is...?
1:00 p.m. - 4:00 p.m.: Statistical Analysis of the "Workhorse Design"
- Ordinary least squares, specification bias, and errors in pretest
- Heckman-type selection models
- Propensity scores
- Instrumental variables within an experiment vs. as a substitute for it
- Level of analysis as a complicating factor—student, class, school
- How interpretation depends on design and measurement as well as analysis
- Designs with better and worse covariate features
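The propensity-score approach above can be sketched end to end. This is a bare-bones illustration, not the workshop's recommended pipeline: the logistic fit is a hand-rolled Newton-Raphson stand-in for a statistics package, and all names are illustrative:

```python
import numpy as np

def propensity_scores(X, treat, iters=25):
    """Estimate propensity scores with a logistic regression of treatment
    on covariates, fit by Newton-Raphson."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add an intercept
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))
        W = p * (1 - p)
        H = X1.T @ (X1 * W[:, None])             # Hessian of the log-likelihood
        beta += np.linalg.solve(H, X1.T @ (treat - p))
    return 1.0 / (1.0 + np.exp(-X1 @ beta))

def att_nearest_neighbor(scores, treat, y):
    """Effect on the treated via 1:1 nearest-neighbor matching (with
    replacement) on the propensity score."""
    t_idx = np.where(treat == 1)[0]
    c_idx = np.where(treat == 0)[0]
    matches = c_idx[np.abs(scores[c_idx][None, :] -
                           scores[t_idx][:, None]).argmin(axis=1)]
    return float(np.mean(y[t_idx] - y[matches]))
```

The estimate is only as good as the covariates in the score model, which is why knowing and measuring the selection process matters more than the matching machinery itself.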
Day 5
8:30 a.m. - 11:30 a.m.: Beyond Statistical Matching as a Model for Selection Control—Design Elements to Add to the Basic "Workhorse Design"
- Cohorts as non-equivalent controls, examples
- More pretest waves, examples
- Multiple control groups, examples
- Non-equivalent dependent variables, examples
- Uses (and abuses) of variation in treatment implementation
- Simultaneous use of several such design elements
- This same logic applied to within-study designs
12:30 p.m. - 2:30 p.m.: Common Designs to Avoid and Wrap-Up
- Designs with no pretest, but how can we then do kindergarten studies?
- Limited designs with no control groups
- Proxy pretests and instrumental variables
- Pattern matching designs with no control groups—Minton as the example
- Generalizing Minton
- Unexamined issues attendees want to discuss
- Keeping in touch
These supplemental papers focus on topics related to the workshop:
Cook, T. D. 2008. 'Waiting for life to arrive': A history of the regression-discontinuity design in psychology, statistics and economics. Journal of Econometrics 142(2): 636-54.
Shadish, W., M. Clark, and P. Steiner. 2007. Can nonrandomized experiments yield accurate answers? A randomized experiment comparing random to nonrandom assignment. Working paper.
Steiner, P., T. D. Cook, W. Shadish, and M. Clark. 2008. The importance of covariate selection in controlling for selection bias in observational studies. Working paper.