
Improvements to Experimental Design & Quality

Census Design, Costs, Accuracy

Following a colloquium, IPR statistician Bruce Spencer (right) follows up on a point about his census work with IPR education researcher and statistician Larry Hedges.
As the 2020 U.S. Census approaches, bureau officials must finalize the census design, which means determining which operational programs will be used to collect census data. These decisions include whether to build address lists using in-office technologies or by canvassing in the field, whether to collect data via paper forms or online, and whether to use administrative records and/or third-party data to follow up with people who do not answer, known as non-response follow-up. In making these decisions, the Census Bureau must consider the outputs and accuracy of different operational programs. For example, in terms of output, how many housing units designated for non-response follow-up can be classified as vacant based on administrative records, without any need for in-person follow-up? And in terms of accuracy, what fraction of those housing units will actually be occupied and therefore mistakenly classified as vacant? Since the exact accuracy of each program cannot be known ahead of time, it must instead be forecast. In a project supported by the National Science Foundation (NSF) and the U.S. Census Bureau, IPR statistician Bruce Spencer is collaborating with Census Bureau researchers to forecast the accuracy parameters of different census operational programs at both the national and state levels. This will ultimately help specify error distributions for the state population counts.

Additionally, in an IPR working paper with former IPR graduate research assistant Zachary Seeskin, now at NORC, the two researchers contrast the costs of attaining accuracy with the consequences of imperfect accuracy for census data. They detail how inaccuracy in the 2020 Census could cause large distortions. For instance, an average error of 2 percent for state populations could result in expected federal funding shifts of more than $50 billion over 10 years and expected shifts in the apportionment of as many as seven House seats.
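The apportionment consequence that Seeskin and Spencer quantify can be illustrated with a small simulation. The sketch below is not their model: it uses placeholder state populations, perturbs each count with random error (a 2 percent standard deviation), re-runs the Huntington-Hill method used to apportion the 435 House seats, and counts how many seats move.

```python
import math
import random

random.seed(0)

def huntington_hill(populations, seats=435):
    """Apportion `seats` with the Huntington-Hill method: every state starts
    with one seat, and each remaining seat goes to the state with the highest
    priority value pop / sqrt(n * (n + 1)), where n is its current seat count."""
    alloc = {state: 1 for state in populations}
    for _ in range(seats - len(populations)):
        state = max(populations,
                    key=lambda s: populations[s] /
                    math.sqrt(alloc[s] * (alloc[s] + 1)))
        alloc[state] += 1
    return alloc

# Hypothetical state populations (placeholders, not actual census counts).
true_pops = {f"State{i:02d}": random.randint(600_000, 39_000_000)
             for i in range(50)}

# Perturb each count with independent error (2 percent standard deviation).
noisy_pops = {s: p * (1 + random.gauss(0, 0.02)) for s, p in true_pops.items()}

base = huntington_hill(true_pops)
perturbed = huntington_hill(noisy_pops)
seats_shifted = sum(abs(base[s] - perturbed[s]) for s in base) // 2
print(f"House seats shifted under ~2% state population error: {seats_shifted}")
```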

 

How to Measure Inequality in Small-Group Discussion

In any group working together, such as a jury, some people talk more than others. This inequality may promote efficiency, but it sometimes means that some people, or certain kinds of people, have been ignored. Court opinions on jury size have discussed inequality in talk, with some scholars telling the courts that smaller groups, while offering less diversity in membership, are more egalitarian than larger ones. Is this true? Are smaller juries “better”? In a recent article, law professor, psychologist, and IPR associate Shari Seidman Diamond and her colleagues Mary R. Rose and Dan Powers question this conclusion because, the authors note, there are problems with measuring inequality in small groups. They apply three commonly used metrics to juries to evaluate which is most useful for comparing inequality in small groups. Using four highly realistic datasets from juries that deliberated, either in real trials or in experiments, the researchers tested the measures against both the number of speaking turns and the number of words spoken by each juror. Diamond and her co-authors find that all three measures of inequality correlate with the number of words and turns of speech, but some falsely portray small groups as more egalitarian than they are. The authors show that a metric known as the index of concentration is the most useful for comparing the level of equality across small groups of differing sizes, and they urge more research into applying it. From a policy perspective, this study helps shed light on what the best size for an equitable jury might be. Diamond is the Howard J. Trienens Professor of Law.
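The kind of comparison the authors make can be sketched in code. The example below is illustrative rather than a reproduction of their analysis: it computes two candidate measures on hypothetical juror word counts, the Gini coefficient (one widely used inequality measure) and an index of concentration taken here as the sum of squared talk shares, a Herfindahl-style definition that may differ from the article's exact formulation.

```python
def gini(counts):
    """Gini coefficient of nonnegative counts (0 = perfect equality)."""
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on ranked cumulative shares.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def concentration_index(counts):
    """Sum of squared talk shares (a Herfindahl-style concentration index):
    equals 1/n when everyone talks equally, 1.0 when one person talks alone."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Hypothetical word counts for a 6-member and a 12-member jury.
six_member = [900, 500, 400, 150, 100, 50]
twelve_member = [700, 500, 400, 300, 250, 200, 150, 120, 100, 80, 60, 40]

for label, words in [("6-member jury", six_member),
                     ("12-member jury", twelve_member)]:
    print(f"{label}: Gini={gini(words):.2f}, "
          f"concentration={concentration_index(words):.2f} "
          f"(equal-share baseline 1/n={1/len(words):.2f})")
```

Comparing the concentration index against its equal-share baseline of 1/n is one way to put groups of different sizes on a common footing, which is in the spirit of the cross-size comparison the authors recommend.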


“Thin-Sliced” Child Personality Assessment 

One way to accurately describe children’s personalities is to have strangers observe the children’s behavior in the lab for short periods, or “thin slices.” In a recent study published in Psychological Assessment, psychologist and IPR associate Jennifer Tackett and her colleagues examine a statistical model that integrates this type of observational data from multiple observers and multiple situations in which the children were observed. The model, called the correlated traits, correlated methods model (CTCM), was employed using data from a sample of 326 children aged 9–10 years. The researchers find that the personality traits identified using the CTCM model align with traditional child personality assessments performed by parents, and even provide more information than the parental questionnaires, thus demonstrating that CTCM is reliable and valid. When the CTCM model is applied to data gathered by the thin-slice method, investigators may gain valuable understanding of childhood personality. Tackett and her colleagues include online access to detailed materials used in their study to encourage others to employ them.