
Facilitation of Research Networks & Best Practices

Gun Laws and Crime Rates

IPR economist Charles F. Manski lectured at the National Academy of Sciences on uncertainty in policy analysis and scientific research.

How do right-to-carry laws in the United States affect crime rates? Though gun laws have become a source of heated public debate in the wake of mass shootings, little is known about whether such laws deter crime or lead to more of it. Looking at right-to-carry (RTC) gun laws, which allow individuals to carry concealed handguns, IPR economist Charles F. Manski and John Pepper of the University of Virginia find no academic consensus on their effects: Despite dozens of studies using the same datasets, researchers have arrived at very different conclusions. In the Review of Economics and Statistics, Manski and Pepper highlight the role of the varying assumptions underlying these analyses and explain why researchers should make clear how those assumptions shape statistical results. The two also conducted their own analysis of how RTC laws affected crime in Virginia, Maryland, and Illinois, finding that the effects vary. Under some assumptions, RTC laws appear to have no effect on crime rates; under others, they seem to increase rates for certain crimes, decrease them for others, and have mixed effects for the rest. While the results offer no easy answer, they show why researchers using the same data can reach such different conclusions and how assumptions shape findings. Manski is the Board of Trustees Professor in Economics.
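To see how assumptions can drive conclusions, consider worst-case (Manski-style) bounds on a treatment effect. The sketch below is purely illustrative, using made-up binary data rather than the authors' crime data: with no assumptions about unobserved counterfactual outcomes, the average treatment effect is only bounded to an interval, and that interval can cover zero even when a point estimate under a much stronger assumption looks decisive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary data: d = 1 if a state adopted an RTC law,
# y = 1 if a crime outcome occurred. Entirely synthetic, for illustration.
n = 10_000
d = rng.binomial(1, 0.4, n)
y = rng.binomial(1, np.where(d == 1, 0.30, 0.25))

p1 = d.mean()                # P(D = 1)
p0 = 1 - p1                  # P(D = 0)
ey_d1 = y[d == 1].mean()     # E[Y | D = 1]
ey_d0 = y[d == 0].mean()     # E[Y | D = 0]

# Worst-case bounds: the unobserved counterfactual mean may lie anywhere
# in [0, 1], so E[Y(1)] is only known to lie in
# [E[Y|D=1]P(D=1), E[Y|D=1]P(D=1) + P(D=0)], and similarly for E[Y(0)].
ey1_lo, ey1_hi = ey_d1 * p1, ey_d1 * p1 + p0
ey0_lo, ey0_hi = ey_d0 * p0, ey_d0 * p0 + p1

# Bounds on the average treatment effect, vs. the point estimate obtained
# under the strong assumption that the law is as good as randomly assigned.
ate_lo, ate_hi = ey1_lo - ey0_hi, ey1_hi - ey0_lo
ate_point = ey_d1 - ey_d0

print(f"No-assumptions bounds on ATE: [{ate_lo:.3f}, {ate_hi:.3f}]")
print(f"Point estimate under random assignment: {ate_point:.3f}")
```

On these synthetic data, the no-assumptions interval spans zero, so "no effect" cannot be ruled out, while the random-assignment point estimate suggests a positive effect. The same mechanism explains how studies of the same data can support conflicting conclusions.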

One Replication Is Not Enough 

Can the results of an experiment be replicated? As IPR education researcher and statistician Larry Hedges and IPR postdoctoral fellow Jacob Schauer explain in the Journal of Educational and Behavioral Statistics, this apparently simple question does not have a simple answer. They point out that replicating a study is not a straightforward matter of repeating the experiment with the same or even a larger sample size than the first, comparing the results of the two, and seeing if they are the "same." Hedges and Schauer ask whether two studies, the original and a single replication, can ever be enough to demonstrate conclusively that a result has or has not been replicated. They show that the statistical uncertainty in the comparison between two studies is greater than the uncertainty in each study considered separately, so a single replication study cannot support firm statistical conclusions. Instead, the authors conclude, researchers need to concentrate on the design of replication studies, in particular on the ensemble of studies needed to establish replicability. Hedges is the Board of Trustees Professor of Statistics and Social Policy and of Psychology and co-director of the STEPP Center.
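A rough numerical sketch of the core point, using invented effect sizes and standard errors rather than values from the paper: because the variance of a difference is the sum of the two variances, the comparison between an original study and its replication is noisier than either study on its own.

```python
import math

# Hypothetical standardized effect estimates and standard errors for an
# original study and one replication (illustrative numbers only).
d_orig, se_orig = 0.30, 0.10
d_rep,  se_rep  = 0.18, 0.10

# Uncertainty of the comparison: Var(d_orig - d_rep) = Var(d_orig) + Var(d_rep)
se_diff = math.sqrt(se_orig**2 + se_rep**2)

# 95% confidence interval for the difference between the two effects
diff = d_orig - d_rep
lo, hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"SE of each study:     {se_orig:.3f}")
print(f"SE of the comparison: {se_diff:.3f}  (about 1.41x larger here)")
print(f"Difference: {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# The interval easily covers 0 as well as sizable discrepancies, so two
# studies alone cannot settle whether the result "replicated" -- hence the
# emphasis on designing ensembles of replication studies.
```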

Are Best Practices in Meta-Regression Being Used? 

The growing use and significance of research synthesis via meta-analysis in education, psychology, and medicine led IPR statistician Elizabeth Tipton, former IPR graduate research assistant James Pustejovsky, now at the University of Wisconsin-Madison, and Teachers College's Hedyeh Ahmadi to examine meta-regression methods. Meta-regression, the extension of regression models to the meta-analysis setting, helps explain why effect sizes differ across the studies being analyzed and is especially important when large numbers of studies are synthesized. In the first of two articles, the authors review the development of meta-regression methods over the last 40 years and identify five best practices on which scholars broadly agree. In the second, they examine how meta-regression is actually applied in four major journals covering psychology, organizational psychology, education, and medicine, comparing the methods in studies those journals published in 2016 against the best practices identified in the first article. They find that the five best practices are rarely followed. Some of the gap between best and actual practice stems from the default settings of common meta-regression software, and the authors recommend stronger methodologist-researcher partnerships to help close it. Tipton is associate professor of statistics and co-director of the STEPP Center.
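As a concrete illustration of what meta-regression does, the sketch below fits an inverse-variance weighted regression of hypothetical study effect sizes on a study-level moderator. The data, the moderator, and the fixed tau-squared value are all invented; this is not the authors' analysis, and a real application would estimate the between-study variance (e.g., via REML) and use robust variance methods along the lines the articles discuss.

```python
import numpy as np

# Hypothetical meta-analytic data: effect size, sampling variance, and a
# study-level moderator (e.g., intervention hours). Invented values.
effects   = np.array([0.10, 0.25, 0.32, 0.15, 0.40, 0.28])
variances = np.array([0.02, 0.01, 0.03, 0.02, 0.01, 0.02])
hours     = np.array([5.0, 10.0, 15.0, 8.0, 20.0, 12.0])

# Random-effects meta-regression: weights are 1 / (v_i + tau^2).
# tau^2 (between-study variance) is fixed here for simplicity; standard
# software estimates it, and its default method is one source of the
# best-vs-actual-practice discrepancies the articles describe.
tau2 = 0.01
w = 1.0 / (variances + tau2)

# Weighted least squares: beta = (X'WX)^{-1} X'Wy
X = np.column_stack([np.ones_like(hours), hours])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)

# Conventional (model-based) standard errors from (X'WX)^{-1}
cov = np.linalg.inv(X.T @ W @ X)
se = np.sqrt(np.diag(cov))

print(f"Intercept: {beta[0]:.3f} (SE {se[0]:.3f})")
print(f"Moderator slope per hour: {beta[1]:.3f} (SE {se[1]:.3f})")
```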