Comprehensive OOS Evaluation of Predictive Algorithms with Statistical Decision Theory (WP-24-10)

Jeff Dominitz and Charles F. Manski

Dominitz and Manski argue that comprehensive out-of-sample (OOS) evaluation using statistical decision theory (SDT) should replace the current practice of K-fold cross-validation and Common Task Framework evaluation in machine learning (ML) research. SDT provides a formal framework for performing comprehensive OOS evaluation across all possible (1) training samples, (2) populations that may generate training data, and (3) populations of prediction interest. Regarding feature (3), the researchers emphasize that SDT requires the practitioner to confront directly the possibility that the future may not look like the past and to account for a possible need to extrapolate from one population to another when building a predictive algorithm. SDT is conceptually simple, but it is often computationally demanding to implement. They discuss progress in tractable implementation of SDT when prediction accuracy is measured by mean square error or by misclassification rate. They summarize research studying settings in which the training data will be generated from a subpopulation of the population of prediction interest. They also consider conditional prediction under alternative restrictions on the state space of possible populations that may generate training data. They conclude by calling on ML researchers to join with econometricians and statisticians in expanding the domain within which implementation of SDT is tractable.
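To make the framework concrete, the sketch below illustrates one way a comprehensive OOS evaluation under SDT might be set up when accuracy is measured by mean square error. It is a minimal illustration, not the authors' implementation: the state space, the two candidate prediction rules, and all parameter values are hypothetical assumptions. Each state fixes a population that generates training data and a (possibly different) population of prediction interest, risk is averaged over simulated training samples within each state, and the candidate rules are compared by maximum regret, one decision criterion studied in SDT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state space: each state fixes the mean of the training
# population and the (possibly shifted) mean of the prediction population.
# The shift captures the possibility that the future differs from the past.
states = [
    {"mu_train": 0.0, "mu_pred": 0.0},   # future looks like the past
    {"mu_train": 0.0, "mu_pred": 0.5},   # modest distribution shift
    {"mu_train": 1.0, "mu_pred": 0.5},   # extrapolation across populations
]
SIGMA = 1.0      # common noise standard deviation (assumed known here)
N_TRAIN = 20     # training-sample size
N_REPS = 5000    # Monte Carlo replications over training samples

# Candidate prediction rules: each maps a training sample to a point
# prediction for outcomes in the population of prediction interest.
rules = {
    "sample_mean": lambda y: y.mean(),
    "shrunk_mean": lambda y: 0.5 * y.mean(),  # shrinkage toward zero
}

def risk(rule, state):
    """Expected MSE of `rule` in `state`, averaged over training samples.

    For a constant prediction yhat, the population MSE decomposes as
    Var(y_pred) + (mu_pred - yhat)^2, so only the squared-bias term
    needs Monte Carlo averaging over training samples.
    """
    samples = rng.normal(state["mu_train"], SIGMA, size=(N_REPS, N_TRAIN))
    preds = np.apply_along_axis(rule, 1, samples)
    return SIGMA**2 + np.mean((state["mu_pred"] - preds) ** 2)

# State-by-state risk for every rule, then regret relative to the best
# risk attainable in each state (within the candidate set of rules),
# then the maximum-regret comparison across states.
risk_table = {name: [risk(rule, s) for s in states] for name, rule in rules.items()}
best_per_state = [min(r[k] for r in risk_table.values()) for k in range(len(states))]
max_regret = {
    name: max(r - b for r, b in zip(risks, best_per_state))
    for name, risks in risk_table.items()
}
for name, mr in sorted(max_regret.items(), key=lambda kv: kv[1]):
    print(f"{name}: max regret = {mr:.4f}")
```

In this toy setup, the three nested loops of the paper's "comprehensive" evaluation appear explicitly: the Monte Carlo draws range over training samples, the state space ranges over populations that may generate training data, and each state's mu_pred specifies a population of prediction interest that need not match the training population.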

Jeff Dominitz, Program Area Director, Behavioral Economic Analysis and Decision-Making, NORC at the University of Chicago

Charles F. Manski, Board of Trustees Professor in Economics and IPR Fellow, Northwestern University
