cheap and fast
Like just about everything else it touches, the internet lowers barriers to entry (the need for capital and infrastructure) for surveying as well.
Traditional surveys require either trained interviewers (face-to-face and phone) or a sometimes elaborate series of questionnaire mailings and follow-ups. Internet surveys, on the other hand, are cheap to execute and return results in a matter of days. Sites like Survey Monkey offer a basic survey infrastructure for free and a more flexible one for a small monthly fee. As a practical matter, most responses to internet surveys tend to come within the first 72 hours after launch.
but survey design still matters
Although internet surveys are open to all comers in a way that traditional surveys are not, survey design is still extremely important. The length and physical layout of the survey instrument are crucial, as are the relevance of the questions to the information desired and freedom from bias in the wording of the questions and the answer choices offered. We know that in traditional surveys, small changes in wording can lead to significant changes in responses. I think we have to assume that the same is true for internet surveys.
special issues with internet surveying
In any survey we have to distinguish between the target population, the people we want to find out information about, and the target frame, the set of people who are possible survey participants. Standing behind the survey is the assumption that the frame is a good proxy for the population.
In the case of a phone survey, we limit ourselves to people who have phone numbers. This might have been problematic in the 1930s, when Gallup learned to its cost that only wealthy people had them, but (subject to issues with cellphones) not today. Similarly, in an internet survey, we are limiting ourselves to people with internet access (if we’re going to gather responses from people visiting specific websites) or to people with email addresses (if we’re going to send one).
If we’re surveying the population of internet users about their overall internet involvement or about their email habits, then we probably don’t have a problem. But if we want to find out something about the elderly, or the poor, or about minority groups, internet surveys may not be a good medium.
finding a frame
Suppose we want to find something out about golfers. We could place banner ads on golf-related websites, or establish our own (fat chance that a lot of traffic would come to it, however). We could also rent for one-time use an email list of subscribers to a golf magazine or website, or an email list of people registered with golf equipment companies or golf retailers.
In the latter case, assuming there were an email list for rent, it would doubtless consist not of all subscribers/customers but of the subset who have “opted in” to receive communications from third parties.
So the group we can sample from isn’t:
–the set of all golfers, or even
–the set of all golfers with internet access, or even
–the subset of those that has registered with an online site, but rather
–the subset of the last group that has agreed to accept third-party inquiries.
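The successive shrinkage can be sketched with a few lines of arithmetic. Every figure below is invented purely for illustration; the point is only how quickly the filters compound:

```python
# Hypothetical illustration of how each filtering step shrinks the frame.
# All counts and percentages are invented, not real data about golfers.

population = 25_000_000          # suppose: all golfers

steps = [
    ("have internet access",           0.80),   # hypothetical share
    ("registered with an online site", 0.10),   # hypothetical share
    ("opted in to third-party email",  0.20),   # hypothetical share
]

frame = population
for label, fraction in steps:
    frame = int(frame * fraction)
    print(f"{label:35s}: {frame:>12,}")

print(f"frame as share of population: {frame / population:.2%}")
```

Under these made-up assumptions, the rentable list is 400,000 names, or 1.6% of all golfers.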
We’re a pretty long way away from the group we want to study. Suppose it were the case that only people currently in prison say they’ll accept third-party email from a specific site. We might end up concluding that only people with criminal records, or who are currently incarcerated, play golf (how they’d do so is another question).
Sometimes, providers of lists will also furnish demographic data about the members of the list. The provider may also segment the list by income, occupation or some other variable the purchaser wants to survey. It can easily be, however, that the data are self-reported by the members and not verified by the provider. Since they are subject to the possibility of “white lies” about, say, occupation, handicap, the number of rounds played, or the type of equipment owned, they’re of limited use in checking how representative of all golfers the list may be, and they don’t give a lot of assurance that the list purchaser is getting the demographic he desires.
respondents vs. non-respondents
When we don’t contact potential respondents directly but rely instead on banner ads or pop-ups on websites, it’s impossible to know how representative the respondents are. It’s also impossible to detect people who respond multiple times, or who refuse multiple times, without potential violations of privacy.
As the internet survey cited in my post on tax rates, earnings per share… shows, even where response rates are known, they tend to be low. In that survey, the response rate was about a quarter of those queried. The researchers argue, citing examples, that this is far better than the roughly 10% response rate their colleagues have been getting. Maybe so, but it’s still a big leap of faith to assume that conclusions drawn from this small a pool of respondents hold for non-respondents as well.
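Why a 25% response rate can still mislead is easy to simulate. The sketch below uses entirely invented numbers: a frame of 10,000 people where willingness to respond is correlated with the answer itself (those who agree with the survey’s premise are twice as likely to reply):

```python
import random

random.seed(1)

# Invented simulation of non-response bias.
N = 10_000
# each person's "true" answer: 1 = agrees with some statement, 0 = doesn't
answers = [1 if random.random() < 0.40 else 0 for _ in range(N)]

# agreers respond at 40%, everyone else at 20% -> overall rate near a quarter
responded = [a for a in answers if random.random() < (0.40 if a else 0.20)]

true_share = sum(answers) / N
obs_share = sum(responded) / len(responded)
resp_rate = len(responded) / N

print(f"response rate:              {resp_rate:.0%}")
print(f"true share who agree:       {true_share:.0%}")
print(f"share among respondents:    {obs_share:.0%}")
```

With these assumptions, roughly 40% of the frame agrees, but well over half of the *respondents* do. Nothing in the survey data itself warns you of the gap.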
It’s easy to run statistical tests designed to evaluate linkages between responses in order to draw conclusions from the survey. You’ll always get numbers. But will they have meaning? Not if the frame has already filtered out large components of the target population, or if it’s impossible to determine a response rate. You’ll just have a case of GIGO (garbage in, garbage out).
a convenience sample…
That’s what statisticians call a group of respondents, like those in any internet survey, where you can’t establish that the respondents form a random sample of the target frame. You can run statistical tests on a group like this, but they’re not reliable. Notice, too, that in drawing conclusions from an internet survey, surveyors always say things like, “88% of respondents indicate…” They will never assert that they have polled a random sample of a target frame or that their results are valid for the sample or for the frame. They’ll only claim validity for the group of respondents, admitting, without really calling attention to it, the limitations of the results.
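The “you’ll always get numbers” point can be made concrete. Below, a textbook 95% confidence interval is computed on a convenience sample drawn from an invented frame; the machinery runs happily and produces a tight-looking interval that badly misses the frame’s true value (all numbers are made up for illustration):

```python
import math
import random

random.seed(2)

# Invented frame of 50,000 people; about 30% hold some attribute.
frame = [1 if random.random() < 0.30 else 0 for _ in range(50_000)]

# A convenience sample: people with the attribute are five times
# as likely to turn up as respondents.
sample = [x for x in frame if random.random() < (0.05 if x else 0.01)][:500]

# Standard large-sample confidence interval for a proportion --
# the formula doesn't know or care that the sample wasn't random.
p = sum(sample) / len(sample)
se = math.sqrt(p * (1 - p) / len(sample))
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"'{p:.0%} of respondents indicate...' (95% CI {lo:.0%}-{hi:.0%})")
print(f"true share in the frame: {sum(frame) / len(frame):.0%}")
```

The interval describes the respondents precisely and the frame not at all, which is exactly why careful surveyors only claim “x% of respondents indicate…”.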
Welcome to the world of internet surveying: a convenience sample is the best you can do. You may be able to obtain from it lots of valuable qualitative information about the group you want to study, even if you can’t get rigorous quantitative information.
Clearly, all sorts of parties conduct internet surveys, draw conclusions from the results and act successfully on them. They range from makers of consumer products, like Apple, to the internet divisions of advertising and public relations agencies, to internet businesses like Google, Yahoo…
These companies all know how to select frames and interpret results in a practical manner. But because this skill is so valuable, it generally remains among a company’s most closely guarded trade secrets. It’s not in the public domain and not available through university courses or books, only through experience.
In many ways, this makes internet surveying like the investment business: dependent on professional judgment honed by years of practical experience, and a world away from the not particularly relevant stuff taught in school by career academics.