Improvements to Experimental Design and Quality
How to Measure Inequality in Small-Group Discussion
In any group working together, such as a jury, some people talk more than others. This inequality may promote efficiency, but it can also mean that some people, or certain kinds of people, are being ignored. Court opinions on jury size have discussed inequality in talk, with some scholars telling the courts that smaller groups, while offering less diversity in membership, are more egalitarian than larger ones. Is this true? Are smaller juries “better”? In a recent article, law professor, psychologist, and IPR associate Shari Seidman Diamond and her colleagues Mary R. Rose and Dan Powers question this conclusion, noting problems with how inequality is measured in small groups. They apply three commonly used metrics to juries to evaluate which is most useful for comparing inequality in small groups. Using four highly realistic datasets from juries that deliberated, either in real trials or in experiments, the researchers tested the measures by counting each juror’s speaking turns and words spoken. Diamond and her co-authors find that all three measures of inequality correlate with the number of words and turns of speech, but some falsely portray small groups as more egalitarian than they are. The authors show that a metric known as the index of concentration is the most useful for comparing levels of equality across small groups of differing sizes, and they urge more research into applying it. From a policy perspective, this study helps shed light on what the best size for an equitable jury might be. Diamond is the Howard J. Trienens Professor of Law.
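To make the comparison concrete, here is a minimal sketch of the index of concentration as it is commonly defined, a sum of squared participation shares akin to a Herfindahl index. Treating this as the exact formulation used in the article is an assumption, and the word counts below are invented for illustration.

```python
def index_of_concentration(counts):
    """Sum of squared participation shares.

    Ranges from 1/n (every member of an n-person group participates
    equally) to 1 (one member does all the talking).
    """
    total = sum(counts)
    shares = [c / total for c in counts]
    return sum(s * s for s in shares)

# Hypothetical word counts for a 6-person and a 12-person jury
six_person = [400, 350, 300, 250, 200, 100]
twelve_person = [200] * 6 + [150] * 6

print(index_of_concentration(six_person))     # ~0.19; floor for n=6 is 1/6 ~ 0.167
print(index_of_concentration(twelve_person))  # ~0.085; floor for n=12 is 1/12 ~ 0.083
```

Because the floor of the index depends on group size, comparing differently sized groups requires accounting for that baseline, which is part of what makes measuring inequality in small groups tricky.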
Perspectives From Transgender and Gender Diverse People on Asking About Gender
Research has increasingly focused on the inclusive measurement of transgender and gender diverse (TGD) people's gender identities, yet gaps remain in understanding how these individuals prefer to be asked about their gender in academic studies. In a study published in LGBT Health, professor of medical social sciences and IPR associate Brian Mustanski and his colleagues examine how TGD people want their gender to be asked about and represented in research. In an online survey conducted between 2015 and 2017, the researchers asked 695 TGD people to provide written suggestions for how to ask about gender, and 314 gave suggestions. The participants were primarily White (75.7%) and between 16 and 73 years old. Three broad categories of responses emerged: specific identities to include in response options, specific questions to ask about gender, and qualifiers or nuanced considerations, such as the option to check multiple boxes or a fill-in-the-blank question. Some participants also suggested a two-step method for asking about gender, such as asking about sex assigned at birth and then current gender, while others suggested a question about gender followed by a specific question about whether participants were TGD. The researchers write that improving questions about gender is an important step toward more accurate representation of TGD people in research, and that future research is needed to continue evaluating these suggestions.
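As an illustration of the two-step format described above, here is a minimal sketch; the wording and response options are hypothetical examples, not the instrument the study recommends.

```python
# A hypothetical two-step gender measure (illustrative only; not the
# study's recommended wording or response options).
two_step_measure = [
    {
        "text": "What sex were you assigned at birth?",
        "options": ["Female", "Male", "Prefer not to answer"],
        "select_multiple": False,
    },
    {
        "text": "What is your current gender?",
        "options": ["Woman", "Man", "Transgender woman", "Transgender man",
                    "Non-binary", "A gender not listed (please specify)"],
        "select_multiple": True,  # participants suggested allowing multiple boxes
        "write_in": True,         # and a fill-in-the-blank option
    },
]
```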
Better Designs for Replication Studies
For close to 20 years, researchers in medicine, psychology, education, behavioral economics, and other fields have grappled with how to check research results by replicating the original experiments. Current standards for research replication call for conducting multiple studies, rather than a single study, to see whether the results of the original research can be reproduced. In the Journal of the Royal Statistical Society: Series A, IPR education researcher and statistician Larry Hedges and former IPR postdoctoral fellow Jacob Schauer, now at Northwestern’s Feinberg School of Medicine, turn to how to design ensembles of studies that investigate replication. They ask how many studies, and how many subjects per study, are necessary to ensure statistically sound yet cost-efficient results, drawing on meta-analytic tools in their approach. The authors argue that “approximate replication” is valuable and more attainable than “exact replication,” which may be too strict a standard, and they propose methods for creating optimal designs with sufficient statistical power to assess approximate replication. They describe designs for two analytic aims, hypothesis tests and variance component estimation, and then evaluate them against an actual replication design used by the Many Labs Project, which examined a psychology study on the “retrospective gambler’s fallacy.” Hedges and Schauer conclude that the Project’s design had sufficient power to identify some larger differences between the studies being analyzed, but that other designs would have been less expensive or would have produced more precise estimates or higher-powered hypothesis tests. Hedges is Board of Trustees Professor of Statistics and Education and Social Policy.
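One standard way to think about power in such designs is through a heterogeneity test across the replicate studies. The sketch below uses the common approximation that Cochran's Q follows a noncentral chi-square distribution under heterogeneity; this is an illustrative assumption, not the article's own method, and the sample sizes are invented. It shows how power depends jointly on the number of studies k and the per-arm sample size.

```python
from scipy.stats import chi2, ncx2

def q_test_power(k, n_per_arm, tau2, alpha=0.05):
    """Approximate power of Cochran's Q heterogeneity test across k
    two-arm replication studies of a standardized mean difference,
    each with n_per_arm subjects per arm.

    Approximation: under between-study variance tau2, Q is roughly
    noncentral chi-square with k-1 degrees of freedom and
    noncentrality tau2 * w * (k - 1), where w is the (equal)
    inverse-variance weight of each study.
    """
    v = 2.0 / n_per_arm      # within-study variance of d (small-effect approximation)
    w = 1.0 / v              # inverse-variance weight, identical across studies here
    nc = tau2 * w * (k - 1)  # noncentrality under heterogeneity
    crit = chi2.ppf(1 - alpha, df=k - 1)
    return ncx2.sf(crit, df=k - 1, nc=nc)

# Example: the same 1,280 total subjects split as 16 studies of 40 per arm
# versus 8 studies of 80 per arm, detecting modest heterogeneity (tau2 = 0.05)
print(q_test_power(k=16, n_per_arm=40, tau2=0.05))
print(q_test_power(k=8,  n_per_arm=80, tau2=0.05))
```

The same total number of subjects can yield different power depending on how it is split across studies, which is the kind of cost-versus-precision trade-off the authors' optimal designs are meant to navigate.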