Pollsters read survey results about upcoming elections in a very circumscribed sense, as a snapshot in time (as the well-worn yet apt characterization goes), while the rest of the world tends to view them as predictions. In particular, trial ballot questions are often viewed as predictive of what will happen on Election Day. If you read about the latest University of Texas/Texas Tribune Poll last month, you no doubt noticed that some of its “predictions” did not match up with the actual outcomes of Tuesday’s primaries.
So what happened?
The ongoing challenge of public polling is to reconcile popular expectations about what polls “mean” at election time with our own desire to provide the public with information about mass opinion on politics and policy. We begin with the realization that polling results provide an account of public attitudes only at the time the data are collected. However, publicly released polls tend to be taken as a prediction of what will happen on Election Day. As much as we would like this to be the case, and as pleased as we are when the polling results comport with the eventual reality, we don’t, in the end, view the results in this way.
A situation with (a) a lot of unformed or nonexistent opinions of candidates and (b) active campaigning in multicandidate races with no distinguishing party labels in a notoriously low-turnout election was, and is, likely to create volatility in results and uncertainty about the composition of the electorate. This volatility, particularly in the weeks leading up to an election, as voters slowly begin to pay attention, is why campaigns invest in daily tracking polls if they can afford them. As several candidates found out Tuesday, the past, even the relatively recent past, is always an imperfect guide to the present.
In our own polling, to assess the state of the primary elections, we screened “likely voters” from the larger sample of registered voter respondents — people who told us that they intended to vote in a particular party’s primary and, in addition, said that they were “very” or “somewhat” interested in politics and had voted in “every” or “almost every” one of the past few elections. Even among this group, many expressed no candidate preference in a number of races. With the election just around the corner, we forced them to make a decision — asking which candidate would get their vote in each race if “don’t know” was not among the options. In sum, we reported the results for people who seemed to be “likely” primary voters at some distance from the actual primary election. This screen, like any screen, is arbitrary, but has, in the past, been particularly robust and, maybe even more important to us, is purposefully agnostic about the eventual composition of the electorate.
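As a minimal sketch, the screen described above amounts to a conjunction of three self-reported criteria. The field names and response labels below are illustrative stand-ins, not the actual survey codebook:

```python
# Hypothetical sketch of the likely voter screen: a respondent passes
# only if they (a) intend to vote in a party primary, (b) report being
# "very" or "somewhat" interested in politics, and (c) say they voted
# in "every" or "almost every" recent election.
# Field names and response codes here are assumptions for illustration.
def is_likely_voter(r):
    intends_primary = r["primary_intent"] in ("Republican", "Democratic")
    interested = r["political_interest"] in ("very", "somewhat")
    habitual = r["past_vote_frequency"] in ("every", "almost every")
    return intends_primary and interested and habitual

# Toy sample: only the first respondent clears all three criteria.
sample = [
    {"primary_intent": "Republican", "political_interest": "very",
     "past_vote_frequency": "every"},
    {"primary_intent": "None", "political_interest": "very",
     "past_vote_frequency": "every"},
]
likely = [r for r in sample if is_likely_voter(r)]
```

Note that the screen says nothing about who will actually turn out; it simply filters on stated intent and habit, which is what makes it agnostic about the eventual composition of the electorate.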
Several factors might explain the differences between the election results and our trial ballot numbers in 2014. The most obvious is that many of the campaigns, having limited resources, waited until the final few weeks to make the most of their advertising buys, presenting themselves (or their opponents) to voters, in many cases, for the first time. We intentionally came out of the field just before early voting began to avoid measuring two different things but treating them as one: a reported recollection of one's early vote and a prospective assessment of one's likely vote. The effect of a late surge in campaigning is arguably larger in low-information, low-turnout elections like this one, because most voters form their preferences and eventual vote choices based on very little information and a limited set of criteria about the candidates.
The composition of the electorate is another important factor. The best way to know what the electorate will look like is to know what it has looked like in the past, and for Texas primaries we don’t have systematic information of that kind. Exit polling in non-presidential primaries is rare, and the most recent primary exit polls for Texas are from 2008.
As an alternative to the missing exit polling, we can reverse-engineer the likely electorate based on Tuesday’s results and our polling data, as we have in the tables below. Focusing our attention on the Republican primaries for lieutenant governor and attorney general — two contests in which our poll showed the eventual runner-up as the leader — we see a GOP primary electorate that was, above all else, very conservative and very much aligned with the Tea Party sentiment that many in the media have begun to dismiss. We expect the primary electorate to be ideologically conservative; the likely voter screen should capture a greater share of these voters while also capturing those who vote regularly for reasons other than ideological commitment — civic-minded voters, party stalwarts and the like.
A look at the results:
| GOP lieutenant governor primary | Dewhurst | Staples | Patterson | Patrick | N / (MOE +/-) |
| --- | --- | --- | --- | --- | --- |
| Among likely voters | 37 | 17 | 15 | 31 | 461 / (4.56%) |
| Among all conservatives | 36 | 15 | 16 | 33 | 420 / (4.78%) |
| Among “somewhat” or “extremely” conservative | 36 | 17 | 13 | 34 | 290 / (5.75%) |
| Among “extremely” conservative | 35 | 15 | 12 | 38 | 100 / (9.8%) |
| Among “extremely” conservative likely voters | 33 | 17 | 10 | 39 | 86 / (10.57%) |
| Among those with an opinion of each lt. gov. candidate | 25 | 14 | 21 | 40 | 123 / (8.84%) |
| Among those with an opinion of Dan Patrick | 27 | 13 | 16 | 44 | 301 / (5.65%) |
| Among Tea Party Republican likely voters | 26 | 17 | 20 | 38 | 169 / (7.54%) |
| Among Ted Cruz 2016 GOP primary voters | 36 | 12 | 16 | 36 | 146 / (8.11%) |
| Among Ted Cruz 2016 GOP primary likely voters | 34 | 13 | 16 | 37 | 132 / (8.53%) |
| Among Ted Cruz 2016 GOP primary voters who view Cruz “extremely” favorably and are likely voters | 33 | 14 | 14 | 39 | 123 / (8.8%) |
| GOP attorney general primary | Branch | Smitherman | Paxton | N / (MOE +/-) |
| --- | --- | --- | --- | --- |
| Among LVs | 42 | 20 | 38 | 461 / (4.56%) |
| Among likely voters | 45 | 17 | 38 | 409 / (4.85%) |
| Among all conservatives | 49 | 17 | 34 | 284 / (5.82%) |
| Among “somewhat” or “extremely” conservative | 42 | 22 | 36 | 97 / (9.95%) |
| Among “extremely” conservative | 39 | 23 | 38 | 82 / (10.82%) |
| Among “extremely” conservative likely voters | 43 | 26 | 30 | 124 / (8.8%) |
| Among those with an opinion of each lt. gov. candidate | 48 | 23 | 30 | 294 / (5.72%) |
| Among those with an opinion of Dan Patrick | 44 | 18 | 38 | 159 / (7.77%) |
| Among Tea Party Republican likely voters | 38 | 19 | 43 | 139 / (8.31%) |
| Among Ted Cruz 2016 GOP primary voters | 38 | 19 | 43 | 124 / (8.8%) |
| Among Ted Cruz 2016 GOP primary likely voters | 39 | 17 | 43 | 116 / (9.1%) |
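The margins of error in the "N / (MOE +/-)" column appear to follow the standard 95 percent confidence formula at maximum variance (p = 0.5); a quick sketch reproduces the reported values from the subgroup sizes:

```python
import math

def moe(n, z=1.96):
    """95 percent margin of error, in percentage points, at p = 0.5
    (the maximum-variance assumption typical of reported poll MOEs)."""
    return round(100 * z * math.sqrt(0.25 / n), 2)

# Subgroup sizes from the lieutenant governor table above;
# e.g., n = 461 yields 4.56, matching the likely voter row.
for n in (461, 420, 290, 100, 86):
    print(n, moe(n))
```

The formula makes the tables' caveat concrete: as the subgroups narrow, the margins balloon, from under 5 points for all likely voters to more than 10 points for the smallest slices.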
The tables show us that the 1.3 million voters who turned out for the Republican primary (only 9.6 percent of registered voters) largely came from the most conservative quarters of the Texas GOP. See, for example, how closely the preferences of those who favored Ted Cruz over more moderate Republicans in the 2016 presidential primary trial heat resembled the final result in the GOP attorney general race.
What these tables don’t show is how uninformed and underdeveloped the electorate’s attitudes were in the final weeks of the campaign, a condition that was sure to create volatility (that is, broad but potentially uneven changes in preferences that affect the totals for the candidates). Additional data elaborate the point: About a fifth of GOP voters for each of the lieutenant governor candidates registered neither a positive nor a negative opinion of their preferred candidate. And roughly half of the potential GOP primary voters surveyed in the attorney general and comptroller races initially said that they hadn’t thought enough about the race to form an opinion. This is almost certainly why Debra Medina polled so well among people forced to choose: They recognized her name.
The Democratic side of the ledger was even more disheartening for anyone who wants to assume the existence of a large, engaged and informed electorate. U.S. Senate candidate Kesha Rogers’ strong initial polling — driven in large part by African-American respondents who, in the end, didn’t vote — was also buoyed by the roughly three-quarters of our respondents who initially said that they had no opinion in that primary. (As with the Republicans, those who initially chose no one were then asked which way they leaned.)
However the results of UT/TT polls are viewed, our practice is to be as transparent as we possibly can be with our public opinion data, making every survey, codebook, methodology statement, crosstab and data set available to everyone and anyone who wishes to run their own analyses (or simply take a closer look).
The results from this election will have us re-examining our likely voter screen, especially in low-turnout elections. We’ve written about this screen extensively and, in the case of last year’s poll on the constitutional amendment election, about the different potential results based on one’s interpretation of the data and expectation of the electorate. One idea is to release more results based on different likely voter screens, so that anyone looking at a particular poll can decide what they think is the most likely outcome based on their own expectations about the electorate.
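One way to picture that idea is to run the same trial ballot under several screens and publish each tabulation side by side. A hypothetical sketch follows; the screen definitions, field names, and toy data are assumptions for illustration, not the actual UT/TT instrument:

```python
# Illustrative only: tabulate one trial ballot question under several
# alternative likely voter screens, so readers can pick the electorate
# assumption they find most plausible. All names here are hypothetical.
from collections import Counter

SCREENS = {
    "strict": lambda r: r["interest"] == "very" and r["frequency"] == "every",
    "broad": lambda r: (r["interest"] in ("very", "somewhat")
                        and r["frequency"] in ("every", "almost every")),
    "all registered": lambda r: True,
}

def trial_ballot(respondents, screen):
    """Percent support for each candidate among respondents passing the screen."""
    votes = Counter(r["choice"] for r in respondents if screen(r))
    total = sum(votes.values())
    return {cand: round(100 * n / total, 1) for cand, n in votes.items()}

sample = [
    {"interest": "very", "frequency": "every", "choice": "A"},
    {"interest": "somewhat", "frequency": "almost every", "choice": "B"},
    {"interest": "not very", "frequency": "rarely", "choice": "B"},
]
for name, screen in SCREENS.items():
    print(name, trial_ballot(sample, screen))
```

Even in this toy example, the tighter screen and the looser one put different candidates ahead, which is exactly the kind of divergence that publishing multiple tabulations would make visible.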
As we consider this and other ideas, we’ll continue looking for new ways to leverage the transparency we already practice to more actively engage users in an ongoing discussion of Texas politics.
Correction: A previous version of this post had Ken Paxton's actuals in the attorney general table as 45; the correct number is 44.
This article originally appeared in The Texas Tribune at http://www.texastribune.org/2014/03/06/polling-center-poll-findings-vs-election-results/.