The statistical calculations used for counting votes

Some might be surprised to hear that “Counting lots of ballots [in elections] with absolute precision is impossible.” Wired takes a brief look at how the vote totals are calculated:

Most laws leave the determination of the recount threshold to the discretion of registrars. But not California—at least not since earlier this year, when the state assembly passed a bill piloting a new method to make sure the vote isn’t rocking a little too hard. The formula comes from UC Berkeley statistician Philip Stark; he uses the error rate from audited precincts to calculate a key statistical number called the P-value. Election auditors already calculate the number of errors in any given precinct; the P-value helps them determine whether that error rate means the results are wrong. A low P-value means everything is copacetic: The purported winner is probably the one who indeed got the most votes. If you get a high value? Maybe hold off on those balloon drops.

A p-value is a key quantity in most statistical analysis: it is the probability of seeing results at least as extreme as the ones observed if the null hypothesis were true, so a small p-value suggests the results are unlikely to be due to chance alone (the conventional 0.05 cutoff is the flip side of the familiar 95% confidence level). In Stark's audit setup, the hypothesis being tested is that the reported outcome is wrong, which is why the Wired piece says a low p-value means the purported winner probably did get the most votes.
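To make the logic concrete, here is a minimal sketch in Python of one simplified way an audit-style p-value could be computed from a random sample of ballots. This is not Stark's actual formula (his method is more elaborate and works from the error rates in audited precincts); the function name audit_p_value and the sample numbers are hypothetical, and the sketch assumes a two-candidate race with the null hypothesis that the reported winner actually has no more than half the votes.

```python
# A toy "ballot-polling" style audit p-value -- an illustration only,
# not the method used in California or by Philip Stark.
from math import comb

def audit_p_value(sample_size: int, votes_for_reported_winner: int) -> float:
    """One-sided binomial tail probability: the chance of seeing at least
    this many votes for the reported winner in a random sample of ballots
    if that candidate actually had no more than 50% support."""
    return sum(
        comb(sample_size, k) * 0.5 ** sample_size
        for k in range(votes_for_reported_winner, sample_size + 1)
    )

# Hypothetical example: audit 200 ballots, 120 favor the reported winner.
# The small result (about 0.003) is hard to explain if the reported winner
# did not really get the most votes -- "everything is copacetic."
print(round(audit_p_value(200, 120), 4))
```

Under this framing, a p-value below whatever threshold the law sets would let auditors stop counting and certify the result, while a higher one would call for more auditing; hence the Wired quote's advice to "hold off on those balloon drops."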

So what is the acceptable p-value for elections in California?

I would be curious to know whether people might seize upon this information, for two reasons: (1) it shows the political system is not exact and therefore possibly corrupt, and (2) many people distrust statistics altogether.