For as much criticism as pollsters endured in the run-up to Election Day, a look back shows many of them hit very close to the bull's-eye for the presidential race – but some did better than others.
Take the venerable Gallup. It had Mitt Romney at 49 percent and President Obama at 48 percent in a poll published Monday, a day before the voting. And when undecided voters were split up among candidates, Gallup put the figure at 50 percent Romney, 49 percent Obama.
The actual outcome: Obama 50 percent; Romney 48 percent – within Gallup’s margin of error of plus or minus 2 percentage points.
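That margin of error follows directly from a poll's sample size. A minimal sketch of the textbook calculation (the 95 percent margin of error for a reported proportion, assuming a simple random sample – the sample sizes below are illustrative, not Gallup's actual figures):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate at 50 percent in a typical 1,000-person national poll:
print(round(margin_of_error(0.50, 1000) * 100, 1))   # about 3.1 points

# Getting down to plus or minus 2 points takes roughly 2,400 respondents:
print(round(margin_of_error(0.50, 2400) * 100, 1))   # about 2.0 points
```

Published margins can differ slightly from this formula because pollsters also adjust for weighting and survey design effects.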
“Our final estimate of the national popular vote was actually fairly accurate,” says Frank Newport, Gallup’s editor-in-chief. “It was within a point or two of what the [candidates] got. In a broad sense, it was pretty close.”
Peter Enns, a professor at Cornell University’s Institute for the Social Sciences, agrees that most of the polls fell within the margin of error.
The biggest challenge for pollsters, and the one that accounts for most of the variation among them, is figuring out who will show up on Election Day to vote, Enns says.
It’s the difference between registered voters and likely voters.
“About a week ago, Gallup had Romney up by 5 percent – and that’s substantial if the margin of error is plus or minus 3 percentage points – but among registered voters, they had Romney up by only a point, and that’s a statistical tie,” Enns says.
Gallup’s Newport acknowledges that the Obama campaign’s “ground game” was so good that – to some extent – it upset conventional polling theory.
Usually, the question is how many people who say they will vote actually do so. “But this time around … people might tell a pollster they were not planning to vote and then do the opposite on Election Day,” Newport said.
John Hudak, a fellow in governance studies at the Brookings Institution, says the big turnout among Latinos this year could have thrown some pollsters a curve, especially the ones that aren’t flexible enough to conduct their face-to-face or telephone interviews in Spanish.
“In this election cycle in particular, where President Obama won over 70 percent of the Latino vote, if you’re not sampling Latinos in Spanish in states like Florida, New Mexico or Colorado, you’re not capturing that entire demographic,” Hudak says.
Other factors can also affect drawing a good sample of likely voters. Whether a pollster calls any (or enough) cell phones could be crucial, though the issue is controversial in public opinion polling circles.
The theory is that people who use cell phones only (no landline) tend to be Democrats, so if you’re not reaching those folks, you’re undercounting the blue column. But cell phones have their own problems: with built-in caller ID, fewer people are likely to answer a call from a number they don’t recognize – like one from a pollster.
“Some say it’s important, others that it’s not. That’s an open question,” says Hudak.
There’s another way to zero in on the perfect numbers. Cue statistician Nate Silver, who blogs for The New York Times. His forecasts this election year were deadly accurate.
He called all 50 states correctly and was within a few tenths of a percentage point on the popular vote, beating his own extraordinary record of missing just one state in 2008.
Silver uses lots of individual polls and then runs them through a weighted formula. “Each of the polls is essentially a best guess with some error around it,” Enns says. “If you combine across all the polls, some are going to be wrong in one direction and some are going to be wrong in the other direction, and that should get you closer to an accurate result.”
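As a rough illustration of that idea, here is a minimal sketch of a poll aggregate. The polls are hypothetical, and sample size stands in for the more elaborate weights – pollster track record, recency, house effects – that a forecaster like Silver actually uses:

```python
# Hypothetical national polls: (candidate's reported share, sample size)
polls = [(0.49, 2700), (0.50, 1000), (0.48, 800)]

# Weight each poll by its sample size, so larger samples count for more.
total_n = sum(n for _, n in polls)
aggregate = sum(share * n for share, n in polls) / total_n

print(round(aggregate * 100, 1))  # 49.0 -- lands between the individual polls
```

Because individual polls err in different directions, their errors partly cancel in the aggregate, which is why the combined estimate is generally steadier than any single poll.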
So, that’s the first thing. The second is that Silver relies on polls conducted on a state-by-state basis instead of a national sample, Hudak says. The state polls provide a better perspective because they are closer to ground level.
Enns says Silver and the others who aggregated and weighted polls provided very good information this election.
“Where errors were made is when one particular poll was overemphasized, and all the weight was placed on that,” Enns says.
“So, if you get someone quoting the Gallup poll, essentially they are weighting that poll as one and all other information as zero,” he says. “You might be right, but you are much more likely to be wrong than if you take the information from all polls together.”