The rollout of the first set of results from the February 2013 University of Texas/Texas Tribune Poll was accompanied by a brief summary of some methodological tweaks to our standard practices. One of those changes that we think is particularly important is the practice of including a second estimated margin of error sensitive to the process by which our results are weighted. With the attention paid to the results of the poll and the larger discussion going on in the political media about polling practices, we thought a follow-up on the broader context that informed this change might interest readers who follow polling in the state.
When we started the UT statewide poll in Texas, before the poll evolved into our partnership with The Texas Tribune, we thought of it as an effort to use cutting-edge techniques, in this case the internet, to leap ahead of the difficulties of phone polling. It was, frankly, an experimental exercise. While others have had to deal with the increasing difficulty and cost of conducting surveys over the phone, we have had what appears to be a far less difficult time providing accurate, reliable results. This is not to declare phone polling dead by any stretch of the imagination, nor to say that we haven’t had any difficulties. Some populations are hard to reach, or are simply small subsets of those you do reach, no matter the mode of contact. This is the problem that has shaped our decision to increase our sample size, informs our continued weighting of the data to match the population (something that every pollster needs to do), and has moved us to provide a more subtle estimate of the margin of error in addition to the classic standard calculation that is almost universally reported on its own elsewhere.
While the 2012 election was a watershed for attention to polls, this attention produced mixed results in the overall quality of the public discussion of polling. Explorations of methodological and technical issues that have traditionally only interested academics, practitioners and the nerdiest of the nerdy have expanded into a larger niche market for political junkies. But as the numerous flawed predictions about the outcome of the 2012 presidential election illustrated, the uptick in interest in polls hasn’t guaranteed better results, better analyses or a more insightful public discourse. (The best example of how more discussion from a seemingly “sophisticated” use of the numbers can generate a flood of media coverage only to be revealed as an exercise in self-aggrandizement is the “unskewed polls” phenomenon that swept through campaign coverage last fall.)
It’s not coincidental that the increased interest in polling comes at a time when polling has become an increasingly thorny enterprise. These difficulties have fueled academic and professional dialogues as well as variability in results, both of which have been picked up in public discussions about polling.
The fundamental driver here is the fact that changes in communication technology have made producing reliable political polling more difficult. People have changed their communication routines and continue to do so at a rapid pace. These changes have been discussed in depth in many other places, but the summary is quite simple: household landlines have been largely supplanted by a wide range of mobile devices linked by wireless networks connected to the internet. These new means of communication have disrupted some of the previously existing assumptions used to reach sample populations in survey research of all types. Amid this change, residents of the United States have not yet settled into stable use patterns, making it difficult to draw consistently good samples. In the meantime, the profession continues to juggle different means of reaching respondents through the thicket of landlines, cell phones, desktop computers, tablets and various hybrids of all of the above, all while trying to derive a representative sample from the results.
Major changes in the market for political information have also increased the pressure to provide more detailed accounting of the methodological details of polling. It is now a rote exercise to invoke the mantra that cable news and the 24-hour media market have “changed everything.” But an offshoot of the changing news market has been what, for lack of a better term, one might think of as the Politico-ization of the market for political news. Specialty publications (like the one you are reading now) have created a larger and faster flow of more finely grained coverage of politics. One might argue about priorities and comprehensiveness, but the development of a political junkie market niche has meant more coverage that in many circumstances drives subsequent coverage in other more broadly focused media outlets.
This more detailed, constant focus on politics means that polls receive more attention than ever before. Following from this attention is greater commercial demand for public polls, more private or contracted polling being made public and more critical attention to every poll that is released. The ascension of Nate Silver and the FiveThirtyEight blog, largely focused on the analysis and interpretation of polls and polling, and enthroned at The New York Times since 2010, provides exhibit A for illustrating both the increased fascination with political polling and the increased difficulty of the enterprise. But the transformation of Silver into a geek celebrity is but one sign of the explosion of a niche market for attention to polls. It’s also evident in the growth of aggregation sites as well as firms who use the release of polls to build their brands and generate business.
Amidst all of this churn, the changes we introduced with the February 2013 poll, though relatively minor adjustments, will help us keep pace with the changing environment in the market for political information in a changing communications landscape. By increasing our sample size and providing more analyses of the data in our blog, we hope to provide interested parties with what they seek: in-depth coverage of the actors and issues that are driving important parts of the political process in the state. By providing a margin of error sensitive to the weighting process, we will make available a more finely tuned assessment of the degree of certainty that we attach to our results.
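For readers curious about what a weighting-sensitive margin of error involves, a common approach (and an assumption on our part here, not a description of our exact procedure) is to compute an effective sample size from the weights using the Kish formula, then plug that smaller effective n into the familiar margin-of-error calculation. A minimal sketch in Python:

```python
import math

def effective_sample_size(weights):
    """Kish effective sample size: (sum of weights)^2 / sum of squared weights.

    Equals the actual n when all weights are equal; shrinks as
    weights become more unequal, reflecting the precision lost
    to weighting.
    """
    total = sum(weights)
    total_sq = sum(w * w for w in weights)
    return total * total / total_sq

def weighted_moe(weights, p=0.5, z=1.96):
    """95% margin of error for a proportion p, using the
    weighting-adjusted (effective) sample size instead of
    the raw sample size."""
    n_eff = effective_sample_size(weights)
    return z * math.sqrt(p * (1 - p) / n_eff)

# Equal weights: reduces to the classic MOE for n = 1200.
equal = [1.0] * 1200

# Unequal weights (as after adjusting a sample to population
# targets) reduce the effective n and widen the interval.
unequal = [0.5] * 600 + [1.5] * 600
```

With equal weights the function reproduces the classic textbook margin of error; with the illustrative unequal weights above, the effective sample size falls from 1,200 to 960 and the reported interval widens accordingly. The specific weight values are hypothetical, chosen only to show the direction of the effect.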
We will also continue a rare practice that we think is vital to the spirit of our enterprise and the missions of the organizations that co-sponsor the poll: We will continue to make all of our data sets available for downloading by anyone who wants to crunch the numbers on their own. You can find them all here at the Texas Politics Project.
This article originally appeared in The Texas Tribune at http://www.texastribune.org/2013/03/08/slight-changes-big-effects-uttt-polls/.