
Publius Speaks


6/11/2012

Hung-up On Polls?

The time has come to challenge the newscasters and pundits (or, more correctly, their employers) about the use of polls to report what appear to be facts.  Polls are tricky: they can rest on biased questions, skewed samples of interviewees, and too few people interviewed.  The worst abuse, however, is that pundits and newscasters present these polls as though none of those shortcomings exist.  It is past time for polls to see the light of transparency.

No radio station, TV channel, newspaper, magazine, periodical, or other licensed communications entity should be allowed to use any poll results unless all the particulars of that poll are revealed to the public, either by announcing them along with the poll’s results or by posting them on a related website.  It is time for transparency instead of misleading impressions.  Polls are not facts, and should not be reported as such!

“It's amazing how apathetic and accepting Americans have become to the relentless barrage of half-truths masquerading as hard-core fact. Sure, everybody with common sense assumes advertisers use questionable data, and political polls are notoriously loaded with inaccuracies. But how often are the motives of scientific research funders examined? And how many people realize the influence a Gallup has over legislative policy? In reporter-editor Crossen's book, such questions are vigorously prosecuted, and the answers are frightening. Her extensive research points to the way major businesses (pharmaceutical and tobacco companies are two notorious examples) produce sham data to support their own products' benefits and have been able to convince people despite such data's variance from common sense.” (from an amazon.com review of Cynthia Crossen’s Tainted Truth)

Wikipedia outlines some of the basic sources of polling inaccuracy:

Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. The uncertainty is often expressed as a margin of error, for example around the percentage of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error among all percentages reported from the full sample.  As a rule of thumb, a poll with a random sample of 1,000 people carries a margin of sampling error of about 3% for an estimated percentage of the whole population.
A 3% margin of error means that if the same procedure were repeated a large number of times, the interval formed by the sample estimate plus or minus 3% would contain the true population value about 95% of the time. The margin of error can be reduced with a larger sample; however, to cut it to 1%, pollsters would need a sample of around 10,000 people.  In practice, pollsters must balance the cost of a larger sample against the reduction in sampling error, and a sample size of around 500–1,000 is a typical compromise for political polls. (Note that to obtain that many complete responses, it may be necessary to contact thousands of additional people.)
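
To make those figures concrete, here is a minimal sketch of the standard textbook formula behind them: for a proportion near 50% drawn from a simple random sample, the 95% margin of error is roughly 1.96 times the square root of p(1-p)/n. The sample sizes are the ones mentioned above; everything else is just the formula.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (500, 1000, 10000):
        print("n = %5d: +/- %.1f%%" % (n, 100 * margin_of_error(n)))
    # n =   500: +/- 4.4%
    # n =  1000: +/- 3.1%
    # n = 10000: +/- 1.0%

Notice that cutting the margin of error by a factor of three requires roughly ten times the sample, which is exactly why pollsters settle for 500–1,000 respondents.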

Another source of error stems from faulty demographic models used by pollsters who weight their samples by particular variables, such as party identification in an election. For example, a pollster who assumes that the breakdown of the US population by party identification has not changed since the previous presidential election may understate the margin of victory or defeat for a candidate whose party has since surged or declined in registration.  Over time, a number of theories and mechanisms have been offered to explain erroneous polling results. Some of these reflect errors on the part of the pollsters; many of them are statistical in nature. Others blame the respondents for not giving candid answers.
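
Here is a hypothetical sketch of that weighting step. Every number below (the population breakdown, the sample breakdown, and the support rates) is invented for illustration; the point is that the weighted estimate is only as trustworthy as the assumed population breakdown.

    # Post-stratification weighting by party ID; all numbers are invented.
    population_share = {"Dem": 0.35, "Rep": 0.30, "Ind": 0.35}  # assumed true breakdown
    sample_share     = {"Dem": 0.45, "Rep": 0.25, "Ind": 0.30}  # who the poll actually reached
    support          = {"Dem": 0.80, "Rep": 0.10, "Ind": 0.50}  # support for Candidate X per group

    # Weight each group so the sample matches the assumed population.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    raw      = sum(sample_share[g] * support[g] for g in support)
    weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
    print("raw: %.1f%%  weighted: %.1f%%" % (100 * raw, 100 * weighted))
    # raw: 53.5%  weighted: 48.5% -- but the "fix" is only as good as the
    # assumed breakdown; if party registration has shifted since the last
    # election, the weighted number is wrong too.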

Non-response bias
Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative of the population; this is non-response bias. Because of this selection bias, the characteristics of those who agree to be interviewed may differ markedly from those who decline. The actual sample is then a biased version of the universe the pollster wants to analyze. In these cases, bias introduces errors over and above those caused by sample size, and error due to bias does not shrink as samples grow larger: taking a larger sample simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, the final results should be unbiased; if those who do not answer hold different opinions, the results are biased. In election polling, studies suggest that these bias effects are small, but each polling firm has its own techniques for adjusting weights to minimize selection bias.
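
A toy simulation makes that last point vivid. The 55% true support figure and the differing answer rates below are assumptions chosen for the demonstration, not real polling data:

    import random
    random.seed(1)

    # True support is 55%, but supporters answer the phone less often
    # (40% vs. 60%); both rates are invented for the demonstration.
    def poll(n):
        answered = []
        while len(answered) < n:
            supporter = random.random() < 0.55
            answer_rate = 0.40 if supporter else 0.60
            if random.random() < answer_rate:
                answered.append(supporter)
        return sum(answered) / n

    for n in (500, 5000, 50000):
        print("n = %5d: measured support %.1f%% (true: 55.0%%)" % (n, 100 * poll(n)))
    # Every estimate hovers near 45%, not 55%: a larger sample just
    # repeats the same mistake on a larger scale.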

Response bias
Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or to please their clients, but more often it results from the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, for example by advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel social pressure not to give an unpopular answer.

Wording of questions
It is well established that the wording of questions, the order in which they are asked, and the number and form of the alternative answers offered can all influence poll results. For instance, the public is more likely to indicate support for a person who is described by the interviewer as one of the "leading candidates". Such framing can override subtler biases toward a candidate, as can lumping some candidates into an "other" category, or vice versa. Thus comparisons between polls often boil down to the wording of the question. On some issues, question wording can produce quite pronounced differences between surveys, though this can also reflect legitimately conflicted feelings or evolving attitudes rather than a poorly constructed survey.  A common technique to control for this bias is to rotate the order in which questions are asked. Many pollsters also split-sample: two different versions of a question are written, and each version is presented to half the respondents.
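
For the curious, here is a minimal sketch of those two controls, rotation and split-sampling; the question texts and the two wordings are invented for the example:

    import random

    # Invented questionnaire for illustration.
    questions = ["Approve of Candidate A?",
                 "Approve of Candidate B?",
                 "Most important issue facing the country?"]

    wording_a = "Do you support A, one of the leading candidates?"  # loaded wording
    wording_b = "Do you support candidate A?"                       # neutral wording

    def build_questionnaire(respondent_id):
        order = questions[:]
        random.shuffle(order)  # rotate question order per respondent
        # Split-sample: even-numbered respondents get one wording, odd the other.
        test_question = wording_a if respondent_id % 2 == 0 else wording_b
        return order + [test_question]

    for rid in range(2):
        print(rid, build_questionnaire(rid))

Comparing results from the two half-samples then reveals how much the wording itself moves the numbers.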

Coverage bias
Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without.

In some places many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful in the United States to make unsolicited calls to phones where the phone's owner may be charged simply for taking the call), these individuals are never included in the polling sample. If the subset of the population without landlines differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. In previous elections the proportion of the general population using only cell phones was small, but as that proportion grows, the worry is that polling only landlines no longer represents the general population. In 2003, 2.9% of households were wireless (cell phones only), compared with 12.8% in 2006. The result is "coverage error".
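
A back-of-the-envelope calculation shows how much such exclusion can matter. Only the 12.8% cell-only share comes from the paragraph above; the two support rates are invented to illustrate the mechanism:

    # Back-of-the-envelope coverage error; the two support rates are invented,
    # only the 12.8% cell-only share comes from the figure cited above.
    cell_only_share   = 0.128
    support_landline  = 0.48   # support among households the poll can reach
    support_cell_only = 0.60   # support among the excluded cell-only households

    true_support = ((1 - cell_only_share) * support_landline
                    + cell_only_share * support_cell_only)
    print("poll reads %.1f%%, truth is %.1f%%"
          % (100 * support_landline, 100 * true_support))
    # poll reads 48.0%, truth is 49.5% -- the 1.5-point gap is pure coverage error.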

So what can be done about media outlets reporting poll results without the underlying details?  Probably very little without enormous pressure from consumers.  A petition might be a place to start.  Letters, phone calls, and e-mails to major media corporations might be another; the same to their corporate sponsors might also help.  But no effort to change this biased reporting can succeed without the tenacity of a bulldog!  Because polls make sensational news, and because they grab the attention of media audiences, there is great reluctance to trade this deceptive reporting for even a modicum of transparency.

The tragedy, of course, is that political polls tend to influence how voters actually vote.  Again from Wikipedia:

“By providing information about voting intentions, opinion polls can sometimes influence the behavior of electors; in his book The Broken Compass, Peter Hitchens asserts that opinion polls are actually a device for influencing public opinion.  A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning. The idea that voters are susceptible to such effects is old, dating at least from 1884.  George Gallup spent much effort, in vain, trying to discredit this theory in his time by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward researchers have found the bandwagon effect more often.

“The opposite of the bandwagon effect is the underdog effect, often mentioned in the media. It occurs when people vote, out of sympathy, for the party perceived to be "losing" the election. There is less empirical evidence for the existence of this effect than for the bandwagon effect.

“Some jurisdictions around the world restrict the publication of the results of opinion polls in order to prevent possibly erroneous results from affecting voters' decisions. For instance, in Canada it is prohibited to publish the results of opinion surveys that would identify specific political parties or candidates in the final three days before a poll closes.
However, most Western democratic nations do not support an outright prohibition on the publication of pre-election opinion polls; most have no regulation, and some prohibit publication only in the final days or hours before the relevant poll closes.”

We desperately need such restrictions here, in addition to full disclosure of every poll’s methodology and questions.  So don’t get hung-up on the latest poll; just keep harboring a healthy dose of doubt about its accuracy whenever you hear one reported.