Bruce Spencer

Professor of Statistics


Biography

Bruce Spencer is a statistician whose interests span statistics and public policy, with a special focus on the design and evaluation of large-scale statistical data programs. He is currently evaluating whether sampling improves the accuracy of the 2000 Census, studying the accuracy of jury verdicts, and planning a new center at the Institute for Policy Research (IPR) devoted to the research and evaluation of statistical data programs.

A member of the Northwestern faculty since 1980, Spencer chaired its statistics department from 1988 to 1999 and from 2000 to 2001. He directed the Methodology Research Center of the National Opinion Research Center (NORC) at the University of Chicago from 1985 to 1992 and was Senior Research Statistician there from 1992 to 1994. Spencer is a member of the U.S. Steering Committee of the Third International Mathematics and Science Study; he was a member of the National Academy of Sciences' Mathematical Sciences Assessment Panel (1991–93) and its Panel on Statistical Issues in AIDS Research (1988–89), and was study director for the Panel on Small Area Estimates of Population and Income (1978–80). Spencer received the Palmer O. Johnson Memorial Award from the American Educational Research Association in 1983 and is an elected Fellow of the American Statistical Association. In December 2006, he was appointed to the National Academy of Sciences' panel to review the programs of the Bureau of Justice Statistics.

Spencer has participated in evaluations of major statistical programs, including the Census Bureau's population estimates, the Social Security Administration's population forecasts, the Department of Education's test score statistics, and state and local agencies' drug abuse estimates. He has also studied the effects of data error on the allocation of public funds and representation. He has published numerous articles and four books, most recently Statistical Demography and Forecasting, written with Juha Alho (Springer, 2005).

Current Projects

Center for Data Evaluation. This Center, now in the planning stages, will be devoted to advancing the evaluation of statistical data programs. A statistical data program collects, processes, and analyzes data and produces statistics; the census, the gross national product and national income accounts, Social Security forecasts, and international statistics on math and science performance are typical examples. The evaluation of a statistical program will include assessments of both accuracy (how well the program measures what it tries to measure) and validity (how closely the objective of the measurement matches the uses of the statistics). As part of its mission, the Center will address some difficult but fundamental theoretical questions for long-range improvement in statistical programs. For example, economists are interested in how human capital should be measured, and sociologists and policy analysts are interested in how many cultures (as distinct from countries of origin) are represented in the population. Separately, with support from The Searle Fund, Spencer is analyzing the accuracy of randomized social experiments, in particular the Head Start Impact Study.

Accuracy of Census 2000. The U.S. Constitution requires that a census be taken to permit equal representation. Every census, however, has been to some degree in error: the counts are inaccurate. The Census Bureau conducts a sample survey on the heels of the census head count to provide estimates of census error for subgroups of the population, and those estimates can be used to adjust the head counts. Yet the adjustments too are inaccurate, a result not only of sampling variability but also of failures in statistical models and of typical non-sampling errors such as non-response, errors in reported data, and errors in data processing. Spencer has been working with the Census Bureau to decompose the "total error" into its components, to develop estimates of the component errors, and thus to estimate the total error. The methodology yields estimates of the error in both the census head counts and the adjusted counts. Given the estimates of error, which are themselves uncertain, one must decide which figures, head counts or adjusted counts, are more accurate. Spencer has been developing the application of statistical decision theory to this important political and statistical question.
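The decision-theoretic comparison described above can be sketched in a few lines. The error components below are entirely hypothetical, and mean squared error is used here as one common way to operationalize "total error"; this is an illustration of the general idea, not the Census Bureau methodology itself.

```python
# Illustrative sketch (hypothetical numbers) of choosing between census
# head counts and adjusted counts by comparing estimated total error.

def total_mse(bias, variance):
    """Total error measured as mean squared error: variance + bias^2."""
    return variance + bias ** 2

# Hypothetical error components for one population subgroup (in thousands).
head_count = {"bias": -4.0, "variance": 0.0}  # undercount bias, no sampling error
adjusted = {"bias": -1.0, "variance": 6.0}    # smaller bias, added sampling/model variance

mse_head = total_mse(**head_count)  # 16.0
mse_adj = total_mse(**adjusted)     # 7.0

better = "adjusted counts" if mse_adj < mse_head else "head counts"
print(f"head-count MSE = {mse_head}, adjusted MSE = {mse_adj}; prefer {better}")
```

With these made-up components the adjusted counts win, but a larger model failure (bias) in the adjustment could reverse the decision, which is precisely why the component error estimates matter.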

Statistical Demography and Forecasting. With Juha Alho of the University of Joensuu, Finland, Spencer has written a graduate-level book that introduces demographic concepts and techniques from the standpoint of modern statistical theory. This perspective simplifies the presentation and unifies results from single-state and multistate demographic models. The book discusses both the theory and the application of estimating uncertainty in population estimates, population forecasts, and so-called "functional" forecasts of quantities that depend strongly on the population forecast. Examples of the latter include forecasts of the size of the labor force, the size of the disabled population, and the financial balance of the Social Security Trust Fund. The authors are developing software to permit easy but powerful implementation of the methods developed in the book.
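As a rough illustration of how uncertainty in vital rates propagates into a population forecast and then into a functional forecast, the Monte Carlo sketch below uses a deliberately simplified one-rate growth model with hypothetical numbers; it is not the book's methodology.

```python
# Illustrative sketch (hypothetical rates and error model): propagate a
# random error in the annual growth rate into a population forecast, then
# into a dependent "functional" forecast (labor force at a fixed rate).
import random

random.seed(0)

def forecast_paths(pop0=1_000_000, growth=0.01, sd=0.005, years=20, n_sims=2000):
    """Simulate population paths when the annual growth rate carries a
    Gaussian error with standard deviation `sd` (all values hypothetical)."""
    finals = []
    for _ in range(n_sims):
        pop = pop0
        for _ in range(years):
            pop *= 1 + growth + random.gauss(0, sd)
        finals.append(pop)
    return finals

paths = sorted(forecast_paths())
lo, hi = paths[len(paths) // 20], paths[-(len(paths) // 20)]  # ~90% interval
labor_lo, labor_hi = 0.6 * lo, 0.6 * hi  # functional forecast: 60% participation
print(f"population in 20 years: roughly {lo:,.0f} to {hi:,.0f}")
print(f"implied labor force:    roughly {labor_lo:,.0f} to {labor_hi:,.0f}")
```

The point of the sketch is that the functional forecast inherits the population forecast's uncertainty interval; in realistic multistate models the propagation is more involved, which is what motivates dedicated software.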

Selected Publications

Accuracy of Population Estimates

Spencer, B., with M. Anderson, B. Daponte, S. Fienberg, J. Kadane, and D. Steffey. 2000. Sampling-based adjustment of the 2000 Census: A balanced perspective. Jurimetrics 40: 341–56.

Spencer, B., with M. Mulry. 1993. Accuracy of the 1990 Census and undercount adjustments. Journal of the American Statistical Association 88: 1080–91.

Spencer, B., with M. Mulry. 1991. Total error in PES estimates of population: The dress rehearsal Census of 1988. Journal of the American Statistical Association 86: 839–54, with discussion 855–63.

Accuracy of Population Forecasts

Spencer, B. 1997. The practical specification of the expected error of population forecasts. Journal of Official Statistics 13: 203–26.

Spencer, B. 1993. Education statistics: A study of eligibility exclusions and sampling: 1992 trial state assessment. In The Trial State Assessment: Prospects and Realities. The Third Report of the National Academy of Education Panel on the Evaluation of the NAEP Trial State Assessment: 1992 Trial State Assessment, ed. R. Glaser, R. Linn, and G. Bohrnstedt, 1–68. Stanford: National Academy of Education.

Spencer, B. 1992. A critique of sampling in the 1990 trial state assessment. In Assessing Student Achievement in the States: Background Studies. Studies for the Evaluation of the NAEP Trial State Assessment Commissioned for the National Academy of Education Panel Report on the 1990 Trial, ed. R. Glaser, R. Linn, and G. Bohrnstedt, 1–18. Stanford: National Academy of Education.

Spencer, B. 1992. Eligibility/exclusion issues in the 1990 trial state assessment. In Assessing Student Achievement in the States: Background Studies. Studies for the Evaluation of the NAEP Trial State Assessment Commissioned for the National Academy of Education Panel Report on the 1990 Trial, ed. R. Glaser, R. Linn, and G. Bohrnstedt, 19–49. Stanford: National Academy of Education.

Spencer, B., with J. Alho. 1991. Population forecasts as a database. Journal of Official Statistics 7: 295–310.

Spencer, B., with W. Foran. 1991. Sampling probabilities for aggregations, with applications to NELS:88 and other educational longitudinal surveys. Journal of Educational Statistics 16: 21–34.

Spencer, B., with J. Alho. 1990. Effects of targets and aggregation on the propagation of error in mortality forecasts. Mathematical Population Studies 2: 209–27.

Spencer, B., with J. Alho. 1990. Error models for official mortality forecasts. Journal of the American Statistical Association 85: 609–16.

Spencer, B., with J. Alho. 1985. Uncertain population forecasting. Journal of the American Statistical Association 80: 306–14.

Spencer, B. 1983. On interpreting test scores as social indicators: Statistical considerations. Journal of Educational Measurement 20: 317–34.

Spencer, B. 1983. Test scores as social statistics: Comparing distributions. Journal of Educational Statistics 8: 249–70.

Spencer, B. 1980. Benefit-Cost Analysis of Data Used to Allocate Funds. New York: Springer-Verlag.

Cost-Benefit Analysis of Statistical Data Programs

Spencer, B. 1994. Sensitivity of benefit-cost analysis of data programs to monotone misspecification. Journal of Statistical Planning and Inference 39(1): 19–31.

Spencer, B., with L. Moses. 1990. Needed data expenditure for an ambiguous decision problem. Journal of the American Statistical Association 85: 1099–104.

Spencer, B. 1985. Optimal data quality. Journal of the American Statistical Association 80: 564–73.

Spencer, B. 1982. Feasibility of benefit-cost analysis of data programs. Evaluation Review 6: 649–72.

Data Error and the Allocation of Public Funds and Representation

Spencer, B. 2000. An approximate design effect for unequal weighting when measurements may correlate with selection probabilities. Survey Methodology 26: 137–38.

Spencer, B., with J. Qian. 1994. Optimally weighted means in stratified sampling. Proceedings of the American Statistical Association, Survey Research Section (XXVIII: Survey Weighting), 863–66.

Spencer, B., with T. Cohen. 1991. Shrinkage weights for unequal probability samples. Proceedings of the American Statistical Association, Survey Research Section.

Spencer, B., with W. Foran. 1991. Sampling probabilities for aggregations, with applications to NELS:88 and other educational longitudinal surveys. Journal of Educational Statistics 16: 21–34.

Spencer, B. 1985. Statistical aspects of equitable apportionment. Journal of the American Statistical Association 80: 815–22.

Spencer, B. 1985. Avoiding bias in estimates of the effect of data error on allocations of public funds. Evaluation Review 9: 511–18.

Spencer, B. 1982. Technical issues in allocation formula design. Public Administration Review 42: 524–29.

Spencer, B. 1982. Concerning dubious estimates of the effects of census undercount adjustment on federal aid to cities. Urban Affairs Quarterly 18: 145–48.