Ahlquist, Mayer, and Jackman (2013, p. 3) wrote:
List experiments are a commonly used social scientific tool for measuring the prevalence of illegal or undesirable attributes in a population. In the context of electoral fraud, list experiments have been successfully used in locations as diverse as Lebanon, Russia and Nicaragua. They present our best tool for detecting fraudulent voting in the United States.*
I’m not sure that list experiments are the best tool for detecting fraudulent voting in the United States. But, first, let’s introduce the list experiment.
The list experiment goes back at least to Judith Droitcour Miller’s 1984 dissertation, but she called the procedure the item count method (see page 188 of this 1991 book). Ahlquist, Mayer, and Jackman (2013) reported results from list experiments that split a sample into two groups: members of the first group received a list of 4 items and were instructed to indicate how many of the 4 items applied to themselves; members of the second group received a list of 5 items — the same 4 items that the first group received, plus an additional item — and were instructed to indicate how many of the 5 items applied to themselves. The difference in the mean number of items selected by the groups was then used to estimate the percent of the sample and — for weighted data — the percent of the population to which the fifth item applied.
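The difference-in-means estimator described above can be sketched in a short simulation. This is a minimal illustration with made-up numbers, not the authors' data or code: it assumes 1,500 respondents per group, four baseline items that each apply with 50 percent probability, and a sensitive item with a true prevalence of 1 percent.

```python
import random

random.seed(0)

# Hypothetical simulation of a list experiment (all numbers are
# illustrative). Control respondents report how many of 4 baseline
# items apply to them; treatment respondents report how many of the
# same 4 items plus one sensitive item apply.
TRUE_PREVALENCE = 0.01   # assumed prevalence of the sensitive attribute
N_PER_GROUP = 1500

def baseline_count():
    # each of the 4 baseline items applies with probability 0.5
    return sum(random.random() < 0.5 for _ in range(4))

control = [baseline_count() for _ in range(N_PER_GROUP)]
treatment = [baseline_count() + (random.random() < TRUE_PREVALENCE)
             for _ in range(N_PER_GROUP)]

# The difference in group means estimates the sensitive item's prevalence.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated prevalence: {estimate:.3f}")
```

Run this a few times with different seeds and the estimate bounces around the true 1 percent by several percentage points, which previews the post's point about sampling noise at this sample size.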
Ahlquist, Mayer, and Jackman (2013) reported results from four list experiments fielded in September 2013, each using one of these statements as the fifth item:
- “I cast a ballot under a name that was not my own.”
- “Political candidates or activists offered you money or a gift for your vote.”
- “I read or wrote a text (SMS) message while driving.”
- “I was abducted by extraterrestrials (aliens from another planet).”
Figure 4 of Ahlquist, Mayer, and Jackman (2013) displayed results from three of these list experiments.
My presumption is that vote buying and voter impersonation are low-frequency events in the United States: I’d guess somewhere between 0 and 1 percent, and closer to 0 percent than to 1 percent. If that’s the case, then a list experiment with 3,000 respondents is not going to detect such low-frequency events. The 95 percent confidence intervals for the weighted estimates in Figure 4 appear to span 20 percentage points or more: the weighted 95 percent confidence interval for vote buying appears to range from -7 percent to 17 percent. Moreover, notice how much estimates varied between the December 2012 and September 2013 waves of the list experiment: the point estimate for voter impersonation was 0 percent in December 2012 and -10 percent in September 2013, a ten-point swing.
So, back to the original point: list experiments are not the best tool for detecting vote fraud in the United States, because vote fraud in the United States is a low-frequency event that list experiments cannot detect without an improbably large sample size. The article indicates that at least 260,000 observations would be necessary to detect a 1 percent difference.
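A back-of-envelope power calculation shows why the required sample is so large. This is a rough sketch using the standard normal-approximation formula for a two-sample difference in means, not the authors' exact calculation; the assumed item-count variance of 1 corresponds to four baseline items that each apply with roughly 50 percent probability.

```python
from statistics import NormalDist

def total_n_for_list_experiment(delta, var_count, alpha=0.05, power=0.8):
    """Approximate total sample size (both groups combined) needed for a
    two-sample difference-in-means test to detect a prevalence difference
    `delta`, given variance `var_count` of the item count in each group.
    Normal-approximation power formula; an illustrative sketch only."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # value for desired power
    n_per_group = 2 * var_count * (z_a + z_b) ** 2 / delta ** 2
    return 2 * n_per_group

# Detecting a 1-percentage-point prevalence (delta = 0.01) requires a
# total sample in the hundreds of thousands under these assumptions.
print(round(total_n_for_list_experiment(0.01, 1.0)))
```

Under these assumptions the formula returns a total on the order of 300,000 respondents, the same order of magnitude as the article's figure of 260,000; the exact number depends on the assumed variance and power.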
If that’s the case, then what’s the purpose of running a list experiment to detect vote fraud with only 3,000 observations? Ahlquist, Mayer, and Jackman (2013, p. 31) wrote:
From a policy perspective, our findings are broadly consistent with the claims made by opponents of stricter voter ID laws: voter impersonation was not a serious problem in the 2012 election.
The implication appears to be that vote fraud is a serious problem only if the fraud is common. But there are a lot of problems that are serious without being common.
So, if list experiments are not the best tool for detecting vote fraud in the United States, then what is a better way? If the goal is detecting the presence of vote fraud rather than estimating its prevalence, I think this is one of those instances in which journalism is better than social science.
* This post was based on the October 30, 2013, version of the Ahlquist, Mayer, and Jackman manuscript, which was located here. A more recent version is located here and has replaced the “best tool” claim about list experiments:
List experiments are a commonly used social scientific tool for measuring the prevalence of illegal or undesirable attributes in a population. In the context of electoral fraud, list experiments have been successfully used in locations as diverse as Lebanon, Russia, and Nicaragua. They present a powerful but unused tool for detecting fraudulent voting in the United States.
It seems that “unused” is applicable, but I’m not sure that a “powerful” tool for detecting vote fraud in the United States would produce 95 percent confidence intervals that span 20 percentage points.
P.S. The figure posted above has also been modified in the revised manuscript. I have a pdf of the October 30, 2013, version, in case you are interested in verifying the quotes and figure.