Thompson’s Upset in Harrisburg Mayor’s Race: Was the Polling Wrong?
Harrisburg City Council President Linda Thompson beat Mayor Steve Reed by more than 1,000 votes in the Democratic Primary Election on Tuesday, May 19th. This upset of Reed, affectionately referred to as “mayor for life” by many Harrisburg residents, has huge ramifications for anyone living, working or raising a family in the city. Yet an SP&R poll of likely Democratic voters, conducted for ABC27 News two weeks before the primary, showed Mayor Reed leading Thompson by 15 points, a margin of 45 to 29 percent, with Les Ford trailing at 8 percent and 19 percent undecided. So what happened? Was the polling flawed?
The answer is yes and no. Polling in primary elections is very different from polling in general elections. Because turnout in a primary is usually lower, knowing precisely whom to poll is the real challenge; in a general election, turnout is usually higher among voters of all major political parties, so the question of who will show up is less of a factor. In a primary, the candidate who does a better job of getting out his or her base of supporters can fly under the radar, even in polls of “likely” voters. This explains why pre-primary polling in the Harrisburg mayor’s race did not necessarily match the official results on Election Day.
In the case of the Harrisburg mayoral election, the Thompson upset is a classic example of how the polling universe we selected to call really impacts the accuracy of the results. Even so, our pre-election analysis of the ABC27 poll clearly laid out Thompson’s recipe for an upset despite Reed’s 15-point margin. In that analysis, posted on our website and available to our Premium Access Members prior to Election Day, we wrote that Reed was winning comfortably among the traditional “super voters” in the poll, meaning those who cast ballots in the most recent similar-type election years (2007, 2006 and 2005, when Reed last faced a primary challenge). These “super voters,” who are usually the most reliable indicator for predicting election outcomes, made up our main polling universe, and our calls were made almost exclusively to them. However, our poll also surveyed a very small subgroup of voters who cast ballots for the first time in the Democratic Presidential Primary on April 22, 2008 between Hillary Clinton and Barack Obama. Among this subgroup of respondents, most of whom were black, Thompson was beating the mayor. This led us to conclude in our analysis: “If Linda Thompson can increase turnout on Election Day among infrequent voters, and in particular approximately 2,500 black voters who cast ballots for the first time in the ’08 Democratic presidential primary between Obama and Clinton, she has a chance to pull off an upset.” We went on to say: “…therefore, the lower the turnout, the better Reed will do; the higher the turnout, the better Thompson’s chances.”
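The turnout logic in that conclusion can be sketched as a simple weighted blend of two universes. The subgroup percentages below are hypothetical, chosen only to illustrate the mechanic; they are not the poll’s actual crosstabs.

```python
# A minimal sketch of the "lower turnout helps Reed" logic. All subgroup
# numbers here are hypothetical illustrations, not actual poll crosstabs.

def blended_topline(super_share, reed_super, reed_new):
    """Blend Reed's support across two universes: reliable 'super voters'
    and first-time presidential-primary voters."""
    return super_share * reed_super + (1.0 - super_share) * reed_new

# Assumed for illustration: Reed at 50% among super voters,
# 25% among the first-time 2008 presidential-primary voters.
low_turnout = blended_topline(0.95, 0.50, 0.25)   # first-timers mostly stay home
high_turnout = blended_topline(0.70, 0.50, 0.25)  # strong GOTV brings them out

print(f"Reed topline, low turnout:  {low_turnout:.1%}")
print(f"Reed topline, high turnout: {high_turnout:.1%}")
```

The point of the sketch: even if no individual voter changes his or her mind, shifting the mix of who shows up moves the topline several points.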
The end result was that our pre-election analysis laying out Thompson’s strategy for an upset was right on the money. Nearly 6,500 voters cast ballots in this primary, an increase of more than forty percent over 2005, the last time Mayor Reed faced primary opposition, when approximately 4,500 voted. Simply put, Thompson’s Get-Out-The-Vote (or GOTV) operation was superior to Reed’s, and she ginned up turnout throughout the city partly by getting more of these “first-time” voters to the polls.
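The turnout arithmetic, using the approximate counts above, checks out:

```python
# Quick check of the turnout figures cited above (approximate counts).
votes_recent = 6500   # ballots cast in the Reed-Thompson primary (approx.)
votes_2005 = 4500     # ballots cast the last time Reed faced primary opposition
increase = (votes_recent - votes_2005) / votes_2005
print(f"Turnout increase: {increase:.0%}")  # roughly 44 percent
```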
So, the key question is why we didn’t sample more of these “presidential-type,” first-time voters. The answer is that pollsters have to make on-the-spot, educated decisions about precisely whom to poll based on a host of factors, many of them informed by limited or incomplete information: the candidates’ fundraising abilities, how much paid media the candidates are running, their grassroots or GOTV efforts, and sheer interest in the race. In our estimation, which ultimately proved incorrect, we had little reason to believe that these new “presidential-type” voters who cast ballots for the first time in 2008 were likely to vote in this primary election. They simply had no past vote history in these types of off-year municipal primaries, and therefore were not a big part of the polling universe we used.
Rather, we stated publicly that the burden was on Thompson to turn them out, and she did. In hindsight, had we polled more of them, our poll would probably have shown a much closer race. But had they not turned out, Reed could have won in a landslide, and the poll would then have made the race look closer than it actually was. Moreover, given that an unusually high 19 percent of voters citywide were still undecided only two weeks out, we stated in our analysis that the undecided vote could ultimately break in Thompson’s favor if she mobilized her supporters, particularly in the parts of the city where the poll showed the sentiment for “change” was highest. Taking all this into account, it is not a stretch that Mayor Reed could poll at 45 percent in a pre-election poll and get only 39 percent on Election Day, particularly given the poll’s five-point margin of error and the 19 percent undecided.
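A back-of-the-envelope sketch shows how a 45 percent poll number can shrink to roughly 39 percent once turnout balloons. The split among the extra voters below is an assumption for illustration, not a published figure:

```python
# Back-of-the-envelope: dilution of Reed's polled share by extra turnout.
# The 25% split among extra voters is assumed purely for illustration.
expected_turnout = 4500   # roughly the 2005-style universe the poll modeled
actual_turnout = 6500     # roughly the number who actually showed up
extra_voters = actual_turnout - expected_turnout   # ~2,000 infrequent voters

reed_polled_share = 0.45                       # Reed's number in the poll
reed_base_votes = reed_polled_share * expected_turnout
reed_extra_share = 0.25                        # assumed: extras break ~3-to-1 for Thompson
reed_extra_votes = reed_extra_share * extra_voters

reed_final = (reed_base_votes + reed_extra_votes) / actual_turnout
print(f"Reed's modeled Election Day share: {reed_final:.1%}")  # about 38.8%
```

Under these assumptions Reed lands near 39 percent without a single polled voter changing his or her mind; the margin of error and the undecided break only widen the plausible range further.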
So the lesson learned is that the accuracy of polling is largely tied to the universe of people being surveyed, that is, the turnout model on which the interviews are based. Therefore, when you are evaluating the accuracy of a poll, some of the key questions to ask are: which types of voters were surveyed? And what kind of turnout is the pollster expecting, particularly in a primary election? Answers to these questions cut to the very heart of the “science” behind our polling, and in future polling analyses we will continue to address and explain these issues to the best of our abilities, both before and after Election Day.