
Uncertainty at the Polls

In the fall of 2000, while the presidency hung in the balance, Americans watched a thirty-six-day political and legal struggle in which even the definition of a vote changed. In 2001, two studies released their recommendations on what should be done to avoid such crises in the future. One of the recommendations in the report by the National Commission on Federal Election Reform is to reduce the level of error in the count to 2 percent. For comparison, according to the report released by scientists from the Massachusetts Institute of Technology and the California Institute of Technology, the error in the 2000 elections was 4 to 6 percent. The cost of updating the nation's voting machines is expected to run about $400 million per year.

As dramatic as the fall 2000 elections were, close elections are frequent. Since 1948, nearly half the states have had at least one occasion on which the winner of their electoral votes was decided by less than one percent. In the same period, half the states have had at least one senatorial race decided by less than one percent of the vote, as reported by the National Commission on Federal Election Reform.

When we go to the polls, we expect that with a fair and accurate count a winner will be clearly discernible. However, a fundamental fact of closely contested races has escaped attention: We cannot accurately determine the outcome of these elections.

By voting, we take a measurement to determine which candidate the voters prefer. Measurements are famously sticky things. There is always a fundamental uncertainty in a measurement. It is not possible, in practice or in principle, to completely circumvent this indeterminacy. In science this is an everyday phenomenon. In preparing a numerical result, a scientist makes a best estimate of the actual measured value and an estimate of the uncertainty of the result. This uncertainty, written as a range of possible values, tells the reader how accurate the measurement is estimated to be.
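For readers who like to see the arithmetic, here is a minimal sketch (not from the original essay, with invented numbers purely for illustration) of how such a result is usually assembled: the best estimate is the mean of repeated measurements, and the quoted uncertainty is the standard error of that mean.

    # A minimal sketch of reporting a best estimate and its uncertainty.
    # The measurements below are invented for illustration.
    import statistics

    def report(measurements):
        best = statistics.mean(measurements)
        # standard error of the mean = sample standard deviation / sqrt(N)
        sigma = statistics.stdev(measurements) / len(measurements) ** 0.5
        return f"{best:.2f} +/- {sigma:.2f}"

    print(report([1.9, 2.1, 2.0, 1.8, 2.2]))   # prints "2.00 +/- 0.07"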

When Einstein published his theory of general relativity in 1916, he predicted that light passing near the Sun would be attracted and its path deflected by a tiny amount. For light that just grazed the Sun, the deflection was predicted to have the same angular size as the edge of a chad seen from 75 feet. In figures, this amounted to 1.8 arc seconds, where one arc second is 1/3600 of a degree. Einstein's result flatly disagreed with two predictions from older theories: Newton's theory of gravity predicted no deflection, while a particle theory of light gave a deflection of 0.9 arc seconds, half the value from general relativity. Determining which theory correctly modeled our universe required observing a star when it was aligned with the edge of the Sun and measuring the bending of its light.
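To give a feel for how small 1.8 arc seconds is, here is a quick back-of-the-envelope check of the chad comparison. The chad thickness is my own assumption (typical punch-card stock, roughly 0.18 mm), not a figure taken from the essay.

    # Rough check of the chad comparison: the angle subtended by the edge
    # of a chad (assumed ~0.18 mm thick) viewed from 75 feet.
    import math

    chad_thickness_m = 0.18e-3          # assumed card-stock thickness
    distance_m = 75 * 0.3048            # 75 feet in metres
    angle_arcsec = math.degrees(chad_thickness_m / distance_m) * 3600

    print(f"{angle_arcsec:.1f} arc seconds")   # about 1.6, comparable to 1.8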

In practice, this measurement was technically difficult and was possible only during a solar eclipse. On November 6, 1919, Eddington announced to the Royal Society his best estimate of 2 arc seconds, with a range of 1.7 to 2.3 arc seconds. This result was consistent with general relativity. "Lights All Askew in the Heavens - Einstein Theory Triumphs," exclaimed a headline in the New York Times. However, the result was highly controversial. In analyzing his data, Eddington had thrown out some of the measurements. If all of his team's data were included, the uncertainty was so large that the predictions of both general relativity and the particle theory of light were consistent with the measured value. A more careful analysis showed that the result was inconclusive; either Einstein's theory or the particle theory could have been correct. Likewise, in the 2000 election, the count could not determine whether Gore or Bush had won.
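To make "consistent" concrete: a prediction agrees with a measurement when it falls inside the reported range. A tiny sketch, using only the figures quoted above:

    # A prediction is consistent with Eddington's announced result if it
    # lies inside the reported range of 1.7 to 2.3 arc seconds.
    low, high = 1.7, 2.3
    print(low <= 1.8 <= high)   # True:  general relativity is consistent
    print(low <= 0.9 <= high)   # False: the particle-theory value is not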

As we upgrade our voting machines - an important first step - it is wise to keep in mind that an error as large as 2 percent in the 2000 Florida count, or in the national popular vote, would be far larger than the final lead in the count. Even with upgraded machines, we would not be able to accurately determine the winner.
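As a rough illustration, using approximate, widely reported figures rather than numbers from the essay: about six million ballots were cast in Florida in 2000, and the certified margin was 537 votes.

    # Rough comparison of a 2 percent counting error with the Florida margin
    # (approximate, widely reported figures).
    ballots_cast = 6_000_000     # roughly six million ballots, Florida 2000
    final_margin = 537           # certified statewide margin
    counting_error = 0.02 * ballots_cast

    print(f"counting error: {counting_error:.0f} votes")          # 120000
    print(f"ratio to final margin: {counting_error / final_margin:.0f}x")   # ~223x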

What does one do with such inconclusive results?

When measuring a quantity, there is only one sure way to settle such a situation: measure again. There have been many independent measurements of the deflection of light since Eddington's 1919 expedition. Modern observations have resulted in smaller uncertainties, which rule out the older theories and vindicate Einstein's prediction.

In the case of voting, measuring again means another vote. Although elections this close are relatively rare, the fact that this problem arose in the 1824 and 1876 elections, as well as in 2000, points to the need for change. Any time the uncertainty in the count is larger than a candidate's lead, there ought to be a run-off election. It remains far from clear whether it would be possible to enact run-off elections, or an alternative, less fragile voting system. But it is important to be clear that spending $400 million annually to upgrade the voting machines will not prevent the ordeal of the 2000 presidential election from happening again.
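Here is a minimal sketch of the run-off criterion described above: call a run-off whenever the estimated uncertainty in the count exceeds the leading candidate's margin. The error model, a flat percentage of ballots cast, is my own simplification, not part of the essay's proposal.

    # Call a run-off when the estimated uncertainty exceeds the margin.
    # The flat error rate is an assumed, simplified error model.
    def runoff_needed(votes_a, votes_b, error_rate=0.02):
        margin = abs(votes_a - votes_b)
        uncertainty = error_rate * (votes_a + votes_b)
        return uncertainty > margin

    # Florida 2000, approximate certified totals: the margin of 537 votes
    # is dwarfed by the estimated counting uncertainty.
    print(runoff_needed(2_912_790, 2_912_253))   # True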

© S. Major 1993-2004 Last modified 11 April 2004
