Here’s another edition of “there is in fact good and nontrivial scholarship in modern historical journals” (we need a catchier name for this series; previously: 1, 2). Today’s installment addresses the question implicit in this post title: how wild was the West?
Randolph A. Roth, “Guns, Murder, and Probability: How Can We Decide Which Figures to Trust?” Reviews in American History 35, no. 2 (2007): 165-175. Accessed 8/20/09, here.
SOME NONTRIVIAL QUESTIONS RAISED
Was homicide really more common in the American West than elsewhere? How can we know?
This is a long-vexed issue in western historiography. It's a standard premise of New Western history that the West was a brutal, violent place, the arena of conquest and so forth. Strictly speaking, the historiography doesn't require that the West actually was brutal; it only has to be assumed and believed to be brutal (image is quite as important as reality in the history of the American West), but naturally a number of historians have fixed on the question of fact. Clare Vernon McKanna and David Peterson Del Mar say the West was violent; Robert R. Dykstra says it wasn't.1
Here Roth picks up the thread. McKanna and others calculate western homicide rates by the modern statistical convention of murders per 100,000 persons per year. But, Dykstra objects, many of these western towns had very few persons in them. A single murder sends the annual homicide rate sky-high; had a drunk gunfighter missed his mark and only wounded his prey, you'd have a peaceable murder rate of zero. This is silly, right?
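To see Dykstra's objection in numbers: with a small denominator, a single killing either way swings the rate wildly. A quick sketch (the town of 1,200 adults is a made-up figure, chosen only for illustration):

```python
# Dykstra's small-denominator objection in miniature: in a hypothetical
# town of 1,200 adults, one killing yields an alarming annual rate,
# and zero killings yields a rate of zero.
population = 1_200
for homicides in (0, 1):
    rate = homicides / population * 100_000  # per 100,000 per year
    print(f"{homicides} homicide(s): {rate:.0f} per 100,000")
```

One drunken shot either hitting or missing is the difference between a rate of zero and a rate several times the modern national figure, which is exactly why single-town, single-year rates mislead.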
Well, if that were the actual character of these studies, maybe. But it’s not, Roth notes. McKanna and Del Mar have taken a number of counties over time, thus significantly increasing the sample size.
Peterson Del Mar found evidence in newspapers of at least 114 homicides in Oregon from 1850 to 1865, when the average adult population was 23,373, so the homicide rate in Oregon was at least 30 per 100,000 adults per year. If we assume that the inhabitants of Oregon were “typical” of residents of the Pacific Northwest in the 1850s and early 1860s—and hence selected “randomly,” for all intents and purposes, from the region’s inhabitants—we can arrive at a rough estimate of the region’s homicide rate. That is a reasonable assumption, because a majority of the region’s population lived in Oregon and because Oregon’s diverse communities were typical of the region as a whole. The formula for a 99% confidence interval for a random sample with over 100 cases is
π = p ± 2.58√(p(1 − p)/n)
Here, “π” stands for the real, but unknown homicide rate; “p” stands for the ratio of the number of homicides to the number of persons at risk in Oregon, 1850–65—114 divided by 373,964 (the average adult population times 16 years); “n” stands for the number of persons at risk—again, for Oregon, 373,964. If we assume that the population of Oregon was representative of the population of the Pacific Northwest, we would estimate that there is a 99% chance that the homicide rate in the Pacific Northwest was between 23 per 100,000 adults per year and 38 per 100,000 adults per year: a relatively narrow and high range.
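Plugging the quoted figures into that formula takes only a few lines; a sketch in Python, using the homicide count and person-years exactly as given in the quote:

```python
import math

# Figures from the quoted passage (Peterson Del Mar's Oregon data)
homicides = 114
person_years = 373_964  # average adult population x 16 years, as given

p = homicides / person_years                    # observed rate per person-year
margin = 2.58 * math.sqrt(p * (1 - p) / person_years)  # 2.58 = z for 99% CI

low = (p - margin) * 100_000
high = (p + margin) * 100_000
print(f"99% CI: {low:.0f} to {high:.0f} per 100,000 adults per year")
# prints: 99% CI: 23 to 38 per 100,000 adults per year
```

The interval matches the 23-to-38 range Roth reports, which is the point of the exercise: even a rough newspaper count, pooled over sixteen years, pins the rate down fairly tightly.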
That estimate, Roth points out, accords with the rate calculated by the mortality census of 1860, which put the murder rate at 24 per 100,000. Nineteenth-century Oregon was a violent place. And what held for Oregon held also for Washington, for British Columbia, and in McKanna’s studies, for California. Roth: “… the interval for all of southern and central California was between 60 and 70 per 100,000 adults per year—seven times the homicide rate in the United States today … An adult exposed to that rate for sixteen years stood a 1 in 96 chance of being murdered, and an adult exposed to that rate for 45 years would have stood a 1 in 34 chance of being murdered.”
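Roth's 1-in-96 and 1-in-34 figures follow from compounding a constant annual risk over a lifetime. A sketch assuming 65 per 100,000 as the midpoint of the 60-to-70 range he quotes (the exact rate he used is not stated):

```python
# Compound a constant annual homicide rate into a cumulative risk of being
# murdered. 65 per 100,000 is an assumed midpoint of Roth's 60-70 range.
annual_rate = 65 / 100_000

def cumulative_risk(years):
    # P(murdered at least once) = 1 - P(surviving every single year)
    return 1 - (1 - annual_rate) ** years

print(f"16 years: 1 in {int(1 / cumulative_risk(16))}")  # 1 in 96
print(f"45 years: 1 in {int(1 / cumulative_risk(45))}")  # 1 in 34
```

The small annual probability compounds: a risk most residents would barely notice year to year becomes, over an adult lifetime, odds of violent death most modern readers would find startling.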
But that is not the entirety of Roth’s article; he goes on to address the question raised by the Bellesiles episode and argues that even without knowing Bellesiles’ methods, just comparing his study with others, “The chance that Bellesiles is right is still nil—less than 1 in a zillion. Thanks to these statistics, we can reject Bellesiles’s findings with confidence. We do not have to examine his methods or his selection of counties. His estimates of gun ownership are impossibly low.”
This is the rare article where one really would have liked more. Roth contends,
It is important, however, that scholars be able to recognize works such as theirs as statistically reliable. Skepticism is fine, as is enthusiasm for studies that tell us unexpected things about the past. But if scholars are unable to distinguish good quantitative work from bad, they can make serious mistakes.
It would have been useful to know a bit more about why, in Roth’s view, historians so often fail to make adequate assessments of probabilities. And it would have been invaluable to have a systematic discussion of what kinds of diagnostic tools all historians—and reviewers of historical works—should have to hand.
1Richard Maxwell Brown makes an argument about why people lined up on different sides in the violence that there may or may not have been in the West.