Democratic Underground

Questions about polling methodology (especially Rasmussen)

This topic is archived.
Home » Discuss » Archives » General Discussion: Presidential (Through Nov 2009) Donate to DU
 
Essene Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 11:52 AM
Original message
Questions about polling methodology (especially Rasmussen)
http://www.rasmussenreports.com/public_content/about_us/methodology

OK, it's just 1,000 responses. I understand that they use sampling methods that try to weight different sub-groups based on observed variability (i.e., you may need to sample fewer African American females than white males to observe trends in the wider population).

Two points of concern... albeit these apply to many pollsters, not just Ras/Fox.

A) They rely entirely on a computer program to interpret automated telephone answers.

In a situation like Florida, I assume there are both a lot of people without registered phone numbers and a lot of heavy accents that basically break the automation and its validity (i.e., I assume the system asks you to repeat yourself if it cannot understand your response, and after a couple of rounds of that I bet many folks just hang up).

Are my assumptions correct? Is there documented proof that such an automated voice system is missing a statistically significant number of VOTERS (and I mean people who actually vote)?

B) They rely on the "likely voter" method which can contain a number of distorting dynamics.

Let's be clear: Rasmussen weights both the pool of folks surveyed and the results of the survey, based on its beliefs about partisan variation... and about how likely folks are to really vote.

They try to balance things based on their constantly updated data on how people classify themselves as Democrat, Republican, or independent. That's fair, but it's the kind of thing that's very "fuzzy."
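For what it's worth, the party-ID balancing itself is arithmetically simple even if the targets are fuzzy: each respondent gets a weight equal to the target share of their party divided by that party's share of the raw sample. A minimal sketch (the target shares below are invented for illustration, not Rasmussen's actual numbers):

```python
# Minimal sketch of party-ID weighting. The target shares are made up
# for illustration; real pollsters derive them from rolling survey data.

def party_weights(sample_parties, targets):
    """Return one weight per respondent: target share / sample share."""
    n = len(sample_parties)
    sample_share = {p: sample_parties.count(p) / n for p in targets}
    return [targets[p] / sample_share[p] for p in sample_parties]

# Raw sample over-represents Democrats relative to the assumed targets.
sample = ["D"] * 45 + ["R"] * 35 + ["I"] * 20
targets = {"D": 0.39, "R": 0.33, "I": 0.28}  # assumed, not real figures
weights = party_weights(sample, targets)
```

Down-weighted Democrats (0.39/0.45 each) and up-weighted independents (0.28/0.20 each) leave the weighted sample matching the targets, and the weights sum back to the sample size.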

More importantly, they literally filter and weight their results depending on how they identify respondents as "likely voters." This gets very tricky, because a die-hard Republican might be very angry and frustrated one night and say he won't vote. The next day, something energizing might cause him to express strong interest in voting. When pollsters rely on this "likely voter" method, they open themselves up to confusing measures of political leaning with measures of enthusiasm. That Republican voter suddenly shows up very strongly in the second survey and can skew a real analysis of trends, because while there is no real shift in McCain vs. Obama support, the strength of that support can tilt the results.
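The confound is easy to demonstrate with toy numbers (a hypothetical illustration, not any pollster's actual model): the same four voters, with unchanged candidate preferences, produce a different topline purely because one voter's stated turnout likelihood changed overnight.

```python
# Hypothetical illustration: weighting preferences by turnout likelihood
# lets a pure enthusiasm change move the topline with zero change in
# underlying support. None of these numbers come from a real poll.

def weighted_topline(respondents):
    """Weight each respondent's candidate preference by turnout likelihood."""
    total = sum(r["likelihood"] for r in respondents)
    support = {}
    for r in respondents:
        support[r["candidate"]] = support.get(r["candidate"], 0.0) + r["likelihood"]
    return {cand: w / total for cand, w in support.items()}

# Night 1: the die-hard Republican is angry and says he probably won't vote.
night1 = [
    {"candidate": "Obama",  "likelihood": 0.8},
    {"candidate": "Obama",  "likelihood": 0.8},
    {"candidate": "McCain", "likelihood": 0.8},
    {"candidate": "McCain", "likelihood": 0.1},  # frustrated, "won't vote"
]
# Night 2: same four voters, same preferences, but he's energized again.
night2 = [dict(r) for r in night1]
night2[3]["likelihood"] = 0.9

print(weighted_topline(night1))  # Obama 0.64, McCain 0.36
print(weighted_topline(night2))  # McCain now narrowly ahead
```

Nobody changed their mind between the two nights, yet the "likely voter" topline swings from a 28-point Obama lead to a slight McCain lead.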

I hope that makes sense.

Has anybody reviewed the best way to handle this issue? Pollsters are businessmen, afterall.


milkyway Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 12:01 PM
Response to Original message
1. Nate Silver of fivethirtyeight.com posted a review of all the tracking polls this morning
 
Essene Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 12:31 PM
Response to Reply #1
4. about "pollster track records"
Edited on Tue Oct-21-08 12:32 PM by Essene
Just because you nailed the polls in 2000 or 2004 doesn't mean your method or assumptions work in 2008.

That's kinda the heart of my problem this year with the entire polling drama.

It's like saying that because an investment firm's stock portfolio performed very well in 2000 and 2004, it will do wonderfully in 2008.

Every election seems to have very unique dynamics and volatility within the population. When you start seeing record registration and turnout, even quadrupled figures among youth voters... it seems to me that it's hard to make an objective analysis.

Every poll handles the question of voter turnout differently.

I am not comfortable swallowing any of the results given how dynamic the population is this year. What I do notice are strong longitudinal TRENDS in one direction... which is why the more qualitative statements by ALL the pollsters basically reflect the same truth.

Rasmussen may report that his last snapshot has McCain up by 1% among likely voters, but his own webpage on the results says he maintains Obama at a 67% chance of winning. That's geek speak, at best...
 
Gormy Cuss Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 12:07 PM
Response to Original message
2. On the first issue,
Edited on Tue Oct-21-08 12:32 PM by Gormy Cuss
The automated voice recognition systems are pretty good at understanding the major regional accents, but if respondents become frustrated and hang up early on, the interview is considered a "breakoff" and is not included in the final 1,000. If the respondent hangs up late in the survey, the interview may be usable and included in the analysis for the subset of questions answered. Most surveys of 1,000 respondents complete a few extra interviews because of the way the calling is done, and it may be possible to substitute one of these spares for a breakoff. If it isn't, the interviewing calls may continue until someone in the quota is found.
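That quota-filling process can be sketched roughly like this (a hypothetical illustration; the threshold for a "usable" interview and the substitution rule are assumptions, not any firm's documented procedure):

```python
# Hypothetical sketch of filling an interview quota while handling
# breakoffs: early breakoffs are discarded, late partial interviews are
# kept as usable, and dialing continues until the quota is met. The
# "usable" threshold here is an assumption for illustration only.

def fill_quota(interviews, quota=1000, min_answered=5):
    """Keep interviews with at least `min_answered` answers, up to `quota`."""
    usable = []
    for interview in interviews:
        if len(interview["answers"]) >= min_answered:
            usable.append(interview)
        if len(usable) == quota:
            break  # quota met; remaining call results are never needed
    return usable

# Toy run: six call outcomes, quota of 3 usable interviews.
calls = [
    {"answers": [1]},                 # early breakoff, discarded
    {"answers": [1, 2, 3, 4, 5]},     # complete
    {"answers": [1, 2]},              # early breakoff, discarded
    {"answers": [1, 2, 3, 4, 5, 6]},  # complete
    {"answers": [1, 2, 3, 4, 5]},     # quota reached here
    {"answers": [1, 2, 3, 4, 5]},     # spare, never consumed
]
final = fill_quota(calls, quota=3, min_answered=5)
```

The two breakoffs never enter the final set, which is why the published "n = 1,000" silently excludes exactly the frustrated hang-ups the original poster is asking about.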

Most polls miss the majority of voters by the very nature of the sample size. What's important is understanding the characteristics of voters and constructing a sample that will give results reasonably close to the actual outcome had the vote happened that day or week. Check this link at the American Association for Public Opinion Research for some discussions of polling:

http://www.aapor.org/
 
Essene Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 12:24 PM
Response to Reply #2
3. "had the vote happened that day or week. " - this is my main beef with weighting of likely voters
Putting aside intricacies of how automated phone surveys may skew results... and even ignoring questions of sampling sizes and stratification of sub-group quotas...

My understanding is that weighting for "likely voters" includes an analysis of how strongly certain groups turn out to vote, based on various "screening questions." These questions try to produce some kind of indexed value for a respondent's commitment to voting. Rasmussen doesn't state it, but I assume they ALSO mix this with their own data and assumptions about voter turnout trends.

Gallup itself has said it adds this step: http://www.gallup.com/poll/110272/Registered-Voters-vs-Likely-Voters.aspx

In other words, this method weights things partly on the "feelings" of the day, on the one hand, and partly on long-term voter turnout trends, which may have little to do with the dynamics of the current day or week.
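A rough sketch of how such a screen might work (the questions, scoring, and turnout cutoff below are illustrative assumptions modeled loosely on Gallup's published description, not any pollster's actual algorithm):

```python
# Hypothetical likely-voter screen: each respondent is scored on a set
# of screening questions, and only the top share -- sized to match the
# pollster's ASSUMED turnout rate -- is kept as "likely voters". The
# question names and the cutoff are assumptions for illustration.

SCREEN_QUESTIONS = ["thought_given", "voted_last_time", "knows_polling_place",
                    "interest_in_election", "plans_to_vote"]

def likely_voter_filter(respondents, assumed_turnout=0.6):
    """Sort by index score (count of 'yes' answers); keep the top share."""
    scored = sorted(respondents,
                    key=lambda r: sum(r.get(q, 0) for q in SCREEN_QUESTIONS),
                    reverse=True)
    cutoff = int(len(scored) * assumed_turnout)
    return scored[:cutoff]

# Toy pool: respondent i answers "yes" to the first i screening questions,
# giving index scores 0 through 5.
pool = [{q: (1 if k < i else 0) for k, q in enumerate(SCREEN_QUESTIONS)}
        for i in range(6)]
likely = likely_voter_filter(pool, assumed_turnout=0.5)  # keeps top 3
```

Note that `assumed_turnout` is exactly the long-term-trend assumption at issue: if the pollster's cutoff reflects past turnout and this year's turnout is unusually high, real voters get screened out before the topline is ever computed.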

It's that last part that bothers me, especially when pollsters know damn well they cannot predict voter turnout in a year and an election like this one.


 
Gormy Cuss Donating Member (1000+ posts) Send PM | Profile | Ignore Tue Oct-21-08 12:58 PM
Response to Reply #3
5. Well, the pollsters have no clients if they don't maintain a high success rate.
That's why the legitimate ones spend an enormous amount of time and resources fine-tuning methodology. They can't predict exact turnout, but they are trying to predict the dynamics of the turnout by reaching a varied enough group of likely voters.* That's why some pollsters have added internet and cell-phone polls, for example.

Pollsters know that a single poll is a snapshot, not a predictor. Well-executed exit polls, on the other hand, should reflect the vote distribution, assuming that nearly all votes were cast at the polling places or that absentee and advance voters were polled properly as well. That's because, as we all know, the only poll that matters is the actual vote.



*As an aside, pollsters have noted that groups of self-identified "likely voters" produced polling results that were more consistent with election results than polls of registered voters, and both groups produced better results than general-population surveys where voting status wasn't determined. That's why it was adopted as a filter. If that filter proves to be unnecessary, pollsters will drop it.
 


© 2001 - 2011 Democratic Underground, LLC