Shaping a portfolio for 2015 (ii): energy

From 2011 through the first half of 2014, the world crude oil price averaged $110 a barrel, more or less (let’s not worry about quality differentials and stuff like that–this is the BIG (and simple) picture).  Now it’s $70.

World oil production is 90 million barrels a day.  So a $40 a barrel reduction in price means $3.6 billion a day no longer leaving the pockets of oil consumers and landing in those of the oil producers.  That’s $1.3 trillion a year–or about 8% of the total GDP of the United States.
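For anyone who wants to check the arithmetic, here’s a quick sketch.  The production and price figures come from the paragraph above; the US GDP figure of roughly $17.5 trillion is my added assumption:

```python
# back-of-the-envelope: what a $40/bbl price drop means for oil consumers

production_bbl_per_day = 90_000_000   # world output, barrels a day
price_drop = 110 - 70                 # $/bbl, 2011-14 average vs. now

daily_windfall = production_bbl_per_day * price_drop   # ~$3.6 billion a day
annual_windfall = daily_windfall * 365                 # ~$1.3 trillion a year

us_gdp = 17.5e12                      # assumed: US GDP of roughly $17.5 trillion

print(f"daily:  ${daily_windfall / 1e9:.1f} billion")
print(f"annual: ${annual_windfall / 1e12:.2f} trillion")
print(f"share of US GDP: {annual_windfall / us_gdp:.0%}")
```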

The important investment questions are:

–does oil stay at this price? and

–who are the winners and losers?

Let’s take the second first.

oil production and consumption

net importing areas, i.e., winners

–Asia: 20.4 million bbl/day; of that, China accounts for 5.6 million, Japan 4.4, South Korea 2.5, India 2.3

–Europe: 10.5 million bbl/day

–North America: 4 million bbl/day; the US imports 6.7 million, but Canada and Mexico are both net exporters

net exporting areas, i.e., losers

–Middle East: 19.2 million bbl/day; Saudi Arabia is 7.1 million

–Eurasia: 8.8 million bbl/day; Russia is 7.2 million

–Africa: 5.7 million bbl/day.

The US is now the largest oil producing country in the world, at 12.3 million barrels/day.  It is followed by Saudi Arabia and Russia, both at 10+ million bbl/day.

On a net basis, Asians and Europeans get the biggest windfall from lower oil prices;  the Middle East and Russia lose the most.  The US situation is more complex.  On the one hand, the nation as a whole is a net winner from lower oil prices.  On the other, the net win is made up of large gains by drivers everywhere, airlines and heating oil users in colder areas, partly offset by substantial losses in oil-producing states like North Dakota and Texas.

second-round effects

There are two varieties:

–historically, a considerable portion of the money collected by oil producing countries is not spent.  Instead, it’s saved, or “recycled” into international financial markets.  Taking the Middle East, Eurasia and Africa together, there’s now a half-trillion dollars a year (the arithmetic is sketched just after this list) being spent in dribs and drabs by consumers outside these areas rather than being parked in sovereign wealth funds, private equity or hedge funds.  Bad for fund managers and bankers, good for consumption.

–Some consumers are abandoning hybrids and starting to buy gas guzzlers again.  Some new shale oil projects may no longer be economical.  Some of the urgency is leaving the alternative energy area.  These counter-trend developments are probably too small to matter much today, but they ultimately have the potential to help reverse the price decline and therefore are worth monitoring.
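A quick check on the half-trillion figure in the first point, using the net-export numbers listed earlier (treating each region’s figure as net exports is my assumption):

```python
# rough check on the "half-trillion dollars a year" no longer flowing
# to oil exporters (net-export figures from the list above, million bbl/day)

net_exports_mm = {"Middle East": 19.2, "Eurasia": 8.8, "Africa": 5.7}

total_bbl_per_day = sum(net_exports_mm.values()) * 1_000_000
price_drop = 40                                  # $/bbl

annual = total_bbl_per_day * price_drop * 365
print(f"${annual / 1e12:.2f} trillion a year")   # ~$0.49 trillion
```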

the first question

My guess is that the oil price stays around where it is now.

But that’s really just a guess.

As investors, we have to deal with ambiguity and uncertainty every day.  It’s more important for me to understand that I’m using this assumption in structuring my portfolio than to be 100% sure that I’m correct.  That way I can keep my eye out for changes and plan what I’ll do when/if I see them. In any situation where a professional genuinely has no insight, the plain-vanilla strategy is to equal-weight the area.  I imagine that most professionals will have less than the S&P’s 8% Energy weighting going into 2015, however.

In the active portion of my holdings, I’ve had virtually no Energy for some time.  I’m going to continue that stance. But I’m going to look around for some Retail or Restaurants to add in the US or EU. I’m leaving my passive holdings alone.  I suppose I could short an oil ETF, but I’m confident that, in my case, that wouldn’t work out well. At some point, well ahead of any reversal in the oil price, the stocks in the Energy sector will bottom out.  We should be watching for this.  I don’t think we’re anywhere near that point yet, however.

iPad 2 is likely to be a big success: Boston Consulting Group survey

the Boston Consulting Group survey

The iPad 2 goes on sale this Friday.  It’s faster than the original iPad–as well as sleeker and lighter.  It comes equipped with cameras and is available in two colors.  A recently released internet survey of over 14,000 respondents, done by the Boston Consulting Group last December, suggests that the iPad 2 will be a much bigger success than its predecessor.  This survey follows up on a previous one done in March 2010, just before the launch of the first iPad.

its conclusions

The main conclusions of the December 2010 survey, which is actually about both tablets and e-readers, are:

1.  Awareness of this category of devices is growing.  In the US, for example, 67% of respondents to the survey knew about tablets and e-readers.  That’s up from 54% in the March poll (I wonder where the other 33% live).

2.  Lots more people intend to buy one. Globally, 69% of respondents who are familiar with tablets and e-readers intend to buy one in the next three years.  That’s a slightly smaller percentage than the 73% from the March survey.  Given that awareness has increased so much, though, the pool of potential buyers is still much deeper than it was a year ago.  Applying the figures to the US, for example, suggests that 17% more Americans want to buy a device now than a year ago (the arithmetic is sketched after this list).  Half plan to pull the trigger in the next 12 months.

3.  Consumers want tablets, not e-readers.  The margin is 3.5/1 in favor of tablets.

4.  The market understands what these devices do. Respondents said they wanted to use the devices to browse online (85%), read email (84%) and view videos (69%).

5.  People are willing to pay for content…

(Note:  my experience is that people aren’t crazy.  They flat-out lie to surveyors about the prices they’d be willing to pay for stuff.  They regard money questions as part of a price negotiation and give low-ball numbers.  Wouldn’t you?  So I regard the content responses as very encouraging.)

US respondents said they’d pay $5-$10 for a digital book, $3-$6 per month for a digital magazine subscription and $5-$10 a month for a daily newspaper.  These are roughly the same numbers people gave last March.  The figure that jumps out to me as especially high is the magazine one.

6. …but not for the device itself. Respondents from the US say they’d pay $130 for an e-reader (which they don’t particularly want), but  only about $200 for a tablet (which they do).  See my note to point 5.
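Here’s how I read the “17% more” figure in point 2.  Combining awareness and purchase intent by simple multiplication is my interpretation, not BCG’s stated method:

```python
# implied share of all US respondents who are potential buyers, then vs. now
# (multiplying awareness by intent-to-buy is my interpretation)

aware_mar, intend_mar = 0.54, 0.73   # March 2010 survey
aware_dec, intend_dec = 0.67, 0.69   # December 2010 survey

pool_then = aware_mar * intend_mar   # ~39% of respondents
pool_now = aware_dec * intend_dec    # ~46% of respondents

print(f"then {pool_then:.0%}, now {pool_now:.0%}")
print(f"growth in potential buyers: {pool_now / pool_then - 1:.0%}")  # ~17%
```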

All in all, the picture looks very good for AAPL.

methodology

BCG had 14,314 respondents from 16 countries:  Australia, Austria, China, Finland, France, Germany, Hong Kong, Italy, Japan, Norway, South Korea, Spain, Switzerland, Taiwan, the UK and the US.  Each provided at least 700 respondents, split equally between male and female.  All were internet users (duh!), and read print books or periodicals.  In Australia, South Korea and China, respondents tended to be clustered around cities; elsewhere they were distributed proportionally in urban and rural areas.

The big advantages of internet surveys are that they’re fast, cheap and can reach lots of people.  The main worry is that the techniques used in traditional surveying to figure out whether respondents really mirror the population you want to find out about don’t work.  See my post on internet surveying for more details.

 

world financial center survey: London, New York, Hong Kong tied at the top

Z/Yen

A London-based consulting group called Z/Yen (the name is supposed to mean risk/reward) has been compiling rankings of the world’s financial centers semiannually for the past four years. The first seven lists were underwritten by the City of London, the latest by the Financial Center Authority of Qatar.

the results

The September 2010 list shows a virtual dead heat for first place among global financial centers:

–London

–New York, and

–Hong Kong.

The remainder of the top ten, in descending order, are:

–Singapore

–Tokyo

–Shanghai

–Chicago

–Zurich

–Geneva

–Sydney.

The bottom of the pile of the 75 cities rated is, again in descending order:

–Athens

–Tallinn

–Reykjavik

patterns

Although the survey has been going on for only a short period of time, a number of patterns have begun to emerge:

the steady rise of Asian centers

–Z/Yen predicts that Singapore will soon emerge as a world co-leader with the present top three.

–Hong Kong and Shanghai have shown the most improvement from list to list

–survey participants name Shenzhen, Shanghai and Singapore as their picks for the cities with the most upward potential

tax havens losing favor

–The Cayman Islands and the Bahamas are among the havens showing the greatest falls in ranking, and all tax-favored centers are declining. Oddly, Scandinavia is the other area on the wane.

methodology

The ranking is obtained by combining the results of an internet survey of financial professionals with an analysis of “instrumental factors” selected to describe the objective working conditions in a given city.

For this list, Z/Yen obtained input from 1,876 survey participants, who made a total of 33,023 city rankings.

The instrumental factors fall into five groups: people, business environment, infrastructure, market access and general competitiveness. Specific factors include things like office rents, personal and corporate income tax rates, and indices of corruption and regulatory opacity.
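Z/Yen doesn’t publish its exact formula, so purely as a toy illustration, the blending might work along these lines.  The weights, the 0–1000 scale and all the numbers below are invented:

```python
# toy illustration only -- Z/Yen's actual weighting scheme is not public.
# Blend an average questionnaire rating with scores on instrumental factors.

def center_score(survey_avg: float, factor_scores: list[float],
                 survey_weight: float = 0.5) -> float:
    """Weighted blend of survey rating and average instrumental-factor score."""
    factor_avg = sum(factor_scores) / len(factor_scores)
    return survey_weight * survey_avg + (1 - survey_weight) * factor_avg

# hypothetical inputs, on a 0-1000 scale like the published index:
# people, business environment, infrastructure, market access, competitiveness
london = center_score(790, [760, 800, 770, 810, 780])
print(round(london))   # 787
```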

quirks

All of the top names on the list–down almost to the middle, in fact–exhibit a reputational glow. That is to say, the ratings derived from the online survey questionnaire are significantly higher than those obtained from statistical analysis of the instrumental factors alone, although the rank order remains the same. My guess is that this is because the survey participants rank on average just shy of twenty cities apiece (33,023 rankings from 1,876 respondents works out to about 17.6 each). How can they know that many?

The perennial question about internet surveys (see my posts on surveying) is that there’s no way of telling whether the respondents are characteristic of the overall group whose opinion you want to obtain. Relative to the survey results that, say, the Census Bureau gets, online surveys have to be regarded as not 100% reliable.

As for particular items in the survey: one section looks at the regional breakdown of favorable and unfavorable ratings. Everyone agrees that London and New York are the top two financial centers. Europeans, however, are very skeptical of Asian financial cities. The US joins Europe in its worries about Shanghai. No one likes the tax havens other than the havens themselves.

my thoughts

My guess is that the list is fairly reliable.

The rise of Hong Kong doesn’t surprise me too much, since that entrepreneurial city has constantly reinvented itself over the years. It still has an advantage over the mainland in support services for financial professionals.

I find the emergence of Singapore interesting, though not shocking, since that city-state has been undergoing a thorough makeover during the past decade.

mutual fund investors and their investment advisors

the ICI survey

I was looking on the Investment Company Institute (the trade organization for the fund management industry) website for aggregate data on the size of tax losses held inside mutual funds and ETFs–the topic of Sunday’s post–when I found a report on a survey of mutual fund investors and their investment advisors.  This was supplemented by a later survey that elaborates on the types of financial advisors used.  I thought the information was interesting, and certainly not what I had expected even though I marketed my products to financial advisors for twenty years.  Here are the survey results:

preliminaries

The survey was done by phone in 2006, before the financial meltdown.  A third-party professional surveying company interviewed 1,003 households.  The report didn’t contain either the survey questionnaire or the raw survey data.

Two characteristics of phone surveys to keep in mind:

–they almost never use cellphone numbers because laws in most states prevent machine dialing of cells, so surveys that include them are more expensive.  This distorts the twenty-something demographic, which probably isn’t so important in this case.

–phone respondents tend to portray themselves in what they consider a more favorable, or more conventionally acceptable, light than they would in an internet survey.

The survey wanted to find out about financial assets held outside workplace retirement plans.

The survey defined a financial advisor as “someone who makes a living by providing investment advice and services.”  This includes not just traditional “full service” brokers, but also independent financial planners, bank and financial institution investment representatives, insurance agents and accountants.

the customer base

1.  The great majority of respondents (82%) had access to professional financial advice.

–Almost half (49%) bought mutual funds exclusively through financial advisors.

–A third bought both through advisors and on their own (through discount brokers or directly from fund companies).  The survey offers no explanation for this behavior, although I think many customers try to control the fees they pay to financial professionals by maintaining two accounts.  One will be a wrap-fee account with a financial advisor;  the second will be a no-fee discount broker “clone” of the first.

–14% bought exclusively on their own.

–4% had no clue where the funds came from.

A total of 60% of the assets were held through financial advisors.

why customers seek advice

For most customers, there’s an event that triggers the search for a financial advisor.

For people in their twenties or fifty-plus, the event is usually receipt of a large lump sum, either an inheritance or payout of a work-related investment account.

For thirty- or forty-somethings, the event is lifestyle-related, usually marriage or the birth of a child.

what they need

The top four things customers want from a financial advisor are:

1.  help with asset allocation

2.  an explanation of the characteristics of the financial instruments they can buy

3.  help in understanding their overall financial situation

4.  assurance that they’re saving enough to meet their financial goals

Although a large minority (about 40%) of respondents seem to want to turn their money over to an advisor and forget about it, most regard their advisor (correctly, I think) as a consultant rather than a money manager and want to play an active part in making the decisions that define their portfolios.

demographics of advice seekers

The predominant characteristic of people with ongoing relationships with financial advisors is that they don’t use the internet to get financial information.  This group is twice as likely to have a financial advisor as those who do use the internet for financial data.  Here the survey really seems to break down, because it doesn’t say whether these customers don’t use the internet to get any information (my guess) or whether it’s just financial information they get elsewhere–if they get any at all.

What’s also interesting is that this (Luddite) behavior is not characteristic of mutual fund holders as a whole.  Other ICI research from around the same time shows that mutual fund owners tend to be intensive users of the internet, with financial information a particular area of interest.  Apparently, this latter–probably younger and more affluent–group doesn’t use financial advisors.

The other ICI research also suggests that the third of respondents who had some advisor-related funds and some not were predominantly in the latter, do-it-yourself camp.  The fact that 60% of the assets were bought through financial advisors suggests that the non-internet users are substantially wealthier, and probably older, than the internet-savvy respondents to the financial advisor survey.

Households where the financial decision maker is female are 50% more likely than average to have an ongoing financial advisory relationship, as are families with over $250,000 in household assets (remember, this is pre-crisis).

The fourth defining characteristic is age. Respondents who were 55+ were 40% more likely than average to have a financial advisor.

who doesn’t want a financial advisor?

This group, a small minority according to the survey, has three defining attributes:

–they want control of their own investments, a desire that increases in intensity with age

–they (think they) know enough, and have access to all the resources they need, to make intelligent decisions on their own.  Sixty-somethings and older hold this conviction the most strongly, followed by the under-45 set.  Those in the 45-59 bracket think so too, but have more doubts.

–they don’t like advisors. They think advisors are too expensive and put their own interests ahead of their clients’.

One in seven respondents, under 45 more often than not, said that they don’t need professional advice because they get it for free from a friend or family member.  Other than my children–who get excellent, if aggressive, investment advice–this group seems to be one fated to live on public assistance later in life.

my thoughts

I wonder whether a survey conducted today would get the same results.

Despite long-term planning and all that, many individual investors seem to have sold their equities at the bottom and put the money into bonds, missing the subsequent equity rebound.  According to ICI data, they continue to allocate assets away from stocks and into bonds, despite the fact that bonds haven’t been so expensive vs. stocks in almost sixty years.  Is this conservative move spurred on by financial advisors?  Probably not.

I remember a story that ran in the Wall Street Journal just after the stock market collapse of 1987.  It was about a prescient retail broker in Connecticut who called up all his clients in late summer of 1987, just before the crash, and convinced them to sell all their stocks–which they did.  He called them back in November, at the market bottom, to advise them to buy again.  No one returned his calls.  He packed up and left for Oregon to try to rebuild his business there.

Maybe the same thing has been happening today.

Another aspect of 1987: I think the market decline marked a paradigm shift by individual investors.  Prior to that, people typically bought individual stocks through full-service brokers.  Post-crash, I think that many individuals, like those Connecticut customers, lost faith in brokers and turned to independent financial advisors and mutual funds.

Does the financial crisis mark another structural turning point?  Maybe.  If so, it’s probably away from mutual funds to ETFs and away from using financial advisors as consultants with specialized financial expertise to self-reliance.

Internet surveying

cheap and fast

Like just about everything else it touches, the internet lowers barriers to entry–the need for capital and infrastructure–for surveying as well.

Traditional surveys require either trained interviewers (face-to-face and phone) or a sometimes elaborate series of questionnaire mailings and followups.  Internet surveys, on the other hand, are cheap to execute and return results in a matter of a few days.  Sites like Survey Monkey offer a basic survey infrastructure for free and a more flexible one for a small monthly fee.  As a practical matter, most responses to internet surveys tend to come within the first 72 hours after launch.

but survey design still matters

Although internet surveys are open to all comers in a way that traditional surveys are not, survey design is still an extremely important issue. The length and physical layout of the survey instrument are crucial, as are the relevance of the questions to the information desired and freedom from bias in the wording of the questions and the possible answer choices offered.  We know that in traditional surveys small changes in wording can lead to significant changes in responses.  I think we have to assume that the same is true for internet surveys.

special issues with internet surveying

coverage

In any survey we have to distinguish between the target population, the people we want to find out information about, and the target frame, the set of people who are possible survey participants.  Standing behind the survey is the assumption that the frame is a good proxy for the population.

In the case of a phone survey, we limit ourselves to people who have phone numbers.  This might have been problematic in the 1930s, when the Literary Digest famously learned to its cost that only relatively wealthy people had them, but–subject to issues with cellphones–not today.  Similarly, in an internet survey, we are limiting ourselves to people with internet access (if we’re going to gather responses from people visiting specific websites) or to people with email addresses (if we’re going to send one).

If we’re surveying the population of internet users about their overall internet involvement or about their email habits, then we probably don’t have a problem.  But if we want to find out something about the elderly, or the poor, or about minority groups, internet surveys may not be a good medium.

finding a frame

Suppose we want to find something out about golfers.  We could place banner ads on golf-related websites, or establish our own (fat chance that a lot of traffic would come to it, however).  We could also rent for one-time use an email list of subscribers to a golf magazine or website, or an email list of people registered with golf equipment companies or golf retailers.

In the latter case, assuming there were an email list for rent, it would doubtless be one consisting, not of all subscribers/customers, but the subset consisting of those who have “opted in” to receive communications from third parties.

So the group we can sample from isn’t:

–the set of all golfers, or even

–of all golfers with internet access, or even

–the subset that has registered with an online site, but

–the subset of the last group that has said they’ll accept third-party inquiries.

We’re a pretty long way away from the group we want to study.  Suppose it were the case that only people currently in prison say they’ll accept third-party email from a specific site.  We might end up concluding that only people with criminal records, or who are currently incarcerated, play golf (how they’d do so is another question).

Sometimes, providers of lists will also furnish demographic data about the members of the list.  The provider may also segment the list by income, occupation or some other variable that the purchaser wants to survey.  It can easily be, however, that the data are self-reported by the members and not verified by the provider.  Since they are subject to the possibility of “white lies” about, say, occupation, handicap, the number of rounds played, the type of equipment owned… they’re of limited use in checking on how representative of all golfers the list may be.  And they don’t give a lot of assurance that the list purchaser is getting the demographic he desires.

respondents vs. non-respondents

When not contacting the potential respondent directly but relying instead on banner ads or pop-ups on websites, it’s impossible to know how representative the respondents are.  In addition, it’s impossible to detect people who respond multiple times, or who refuse multiple times, without potential violations of privacy.

As the case of the internet survey cited in my post on tax rates, earnings per share… shows, even where they’re known, response rates tend to be low.  In that survey, the response rate was about a quarter of those queried.  The researchers argue, pointing out examples, that this is far better than the roughly 10% response rate their colleagues have been getting.  Maybe so, but it’s still a big leap of faith to assume that conclusions drawn from this small a pool of respondents hold for non-respondents as well.

statistical analysis

It’s easy to run statistical tests that are designed to evaluate linkages between responses in order to draw conclusions from the survey.  You’ll always get numbers.  But will they have meaning?  Not if the frame has already filtered out large components of the target population, or if it’s impossible to determine a response rate.  You’ll just have a case of GIGO (garbage in, garbage out).

a convenience sample…

That’s what statisticians call a group of respondents, like those in any internet survey, where you can’t establish that the respondents form a random sample of the target frame.  On a group like this, you can run statistical tests, but they’re not reliable.  Notice, too, that in drawing conclusions from an internet survey, surveyors always say things like, “88% of respondents indicate…”  They will never assert that they have polled a random sample of a target frame or that their results are valid either for the sample or for the frame.  They’ll only claim validity for the group of respondents–admitting, without really calling attention to it, the limitations of the results.

…isn’t nothing

First of all, welcome to the world of internet surveying.  A convenience sample is the best you can do.  You may be able to obtain from it lots of valuable qualitative information about the group you want to study, even if you can’t get rigorous quantitative information.

Clearly, all sorts of parties conduct internet surveys, draw conclusions from the results and act successfully on them.  They range from makers of consumer products, like Apple, to the internet divisions of advertising and public relations agencies, to internet businesses like Google, Yahoo…

These companies all know how to select frames and interpret results in a practical manner.  But because this skill is so valuable, it generally remains among a company’s most closely guarded trade secrets.  It’s not in the public domain and not available through university courses or books–only through experience.

In many ways, this makes internet surveying like the investment business–dependent on professional judgment honed by years of practical experience, and a world away from not particularly relevant stuff taught in school by career academics.

Surveys (I): general

In my post of two days ago, I referenced an academic article based on an internet survey conducted by three professors.  In this post and the next, I want to write about the quirks of internet surveying, as far as I know them.  Internet surveying is in its infancy, however, and, as I see it, most of the craft skill involved in it is still kept as trade secrets in the firms doing this cutting-edge survey work.

(My own experience with surveying comes from a couple of years I spent as a business school adjunct, working on a course whose heart was traditional and internet surveying.  I was lucky enough to work with colleagues who had many years of practical experience in surveying, so I learned a lot. Unfortunately, that’s all in the past.  From an academic point of view, my area–although very popular with students–had several defects:

–we had on average maybe twenty years of actual business experience

–we did more teaching (for much less money) than tenured professors

–we were unique in producing an operating profit for a business school awash in a sea of red ink.

What happened?  As a “cost-cutting” measure, the school discontinued the program and laid us all off.)

Anyway, I think the best way to understand internet surveying is to contrast it with traditional surveying, done through the mail, by phone or in-person interviews.  A thumbnail sketch of the latter is what I’m going to write about in this post.  Tomorrow’s will talk about the internet.  Here goes:

traditional surveys

Traditional surveying is a little more than a century old.  Its model is the government census that countries periodically perform, although surveyors rapidly expanded its use into such diverse areas as political polling, including election-day exit polls, and divining consumer attitudes, either consumers’ general frame of mind or the attributes of specific products they like and dislike.

Researchers assume, with a lot of historical justification, that standard statistical methods can be used to draw reliable quantitative conclusions about the data.

their structure

Every survey starts with information that the surveyor wants to find out about a target population.  Let’s say the trade association for American cereal manufacturers wants to know what people eat for breakfast in the US and how it might get non-cereal eaters to switch to cereal.

There’s a whole subsector of the surveying industry whose job is to turn that desire for information into a specific survey instrument, whose questions are designed to get the required information.  A lot of effort will go into designing questions that minimize the chances that the respondent will misunderstand them, and crafting answer choices that minimize the possibility that the respondent ends up picking the wrong choice by mistake.

Tons of research has been compiled over the years, a lot of it the result of trial and error, about how to do this.  There’s even more about how to follow up and how to persuade people to become respondents.

There are many tricks of the trade.  Other than to point out that sometimes small changes to a question’s phrasing or to a survey’s layout on paper can make a big difference to the answers respondents give, I’m going to skip over this.

steps in conducting the survey

target population

In the cereal survey I mentioned above, the target population is everybody in the US.  But, as the periodic government censuses show, even the government isn’t going to get to communicate with everybody.  And who else has the money to try?

sampling frame

Even the government has to select a sampling frame, that is, a collection of members of the target population who actually have a chance of being surveyed.  Our cereal trade group might decide, for example, that it will take the set of people who are listed in all the telephone books in the US as its sampling frame.  Or it could take the set of all people with street addresses.  Or, at the other end of the spectrum, it could decide to purchase the one-time use of the contact lists of a number of newspapers and magazines.

Clearly, the sampling frame and the target population are not the same thing.  Squatters or migrant workers probably won’t have street addresses.  A potentially more serious problem: a large percentage of Americans under thirty don’t have fixed-line phones, but use cellphones instead.  Among the complications with this group, interviewers are legally barred from using computer dialing machines to access cellphones.  And many people aren’t happy about interviewers using up their minutes.  There are workarounds, but for how long?

The selection of the frame is obviously also bound up with the type of survey you decide to do.  If it’s a phone survey, you’ll only be able to contact people with phones that work–most likely landlines.  If it’s a mail survey, you’re limited to the names and addresses you have access to.

The potential mismatch between the target population and the set of people you can actually reach with a survey instrument is called coverage error. It’s becoming an ever bigger issue, I think.

sample

Let’s say the cereal group decides to do a telephone survey and has access to a database with 50 million phone numbers.  Instead of calling everyone, it will select a random sample from the 50 million.  The sample size can be quite small–say, a thousand or two numbers.  There are well-established conventions for selecting the sample, which dictate the minimum size and govern how the individual numbers are picked (usually computer-generated, and checked against phone databases).
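As a minimal sketch of the mechanics of a simple random draw (the database and sample sizes here are stand-ins, not the real thing):

```python
import random

# stand-in for the cereal group's phone database (50,000 here, not 50 million)
phone_db = [f"555-{n:07d}" for n in range(50_000)]

SAMPLE_SIZE = 1_500                             # small relative to the frame
sample = random.sample(phone_db, SAMPLE_SIZE)   # every number equally likely

print(len(sample), sample[:3])
```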

respondents

Not everyone in the sample will respond.  Non-response comes in two flavors:  a refusal to answer a specific question or a refusal to answer the entire survey.  Non-response rates are the lowest for face-to-face interviews, which are also by far the most expensive to administer.  They are higher for telephone interviews and the highest for survey instruments sent through the mail.

Men tend to decline to answer more frequently than women.  City dwellers decline more often than their country cousins.  For many populations, a request from a university or from the government yields higher response rates.

Non-response rates have been steadily rising over the years, however.  In fact, response rates for very recent mail surveys may be as low as 1% or 2%.  Response rates for phone interviews may be 25%-30%.

Nonresponse error is an increasingly serious problem.  If response rates are low, say 10% or 20% (and even these levels may be hard to achieve), you have to at least worry that only the lunatic fringe has responded to your survey and their responses are in no way indicative of what the sample as a whole is thinking. Traditionally, this is the single biggest headache for surveyors.

post-survey adjustments

Statisticians may adjust the actual responses to make them more meaningful, in either of two ways (a toy sketch of the reweighting follows the list):

–if a respondent hasn’t answered a particular question, like family income, an estimate based on past experience may be substituted, and

–the survey may be reweighted to adjust for known differences in sub-group response rates, such as the tendency for urban response rates to be lower than rural ones.
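Here’s a toy version of the second adjustment.  The population shares and response counts are invented:

```python
# toy reweighting: rural respondents answered at a higher rate than their
# population share warrants, so each rural response must count for less
# (all numbers invented)

population_share = {"urban": 0.80, "rural": 0.20}  # known from, e.g., the census
responses = {"urban": 300, "rural": 200}           # what the survey actually got

total = sum(responses.values())
weights = {group: population_share[group] / (responses[group] / total)
           for group in responses}

print(weights)   # urban responses weighted up (~1.33), rural down (0.5)
```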

Three types of traditional survey

mail

The great bulk of past research work has been done on mail surveys.  Respondents have historically been more truthful in mail surveys than in phone or person-to-person interviews.  But partly because of changes in the way people communicate with each other in the post-internet world, and partly because junk mail companies have become increasingly clever in disguising their offerings as “legitimate” mail, response rates have dropped to very low levels.

Paper surveys are a thing of the past for everyone except the government.

phone

Historically, other than the issue of higher cost, the biggest risk with phone surveys is that people tend to be less than truthful.  For example, if the interviewer asks for the head of the household, the person who answers the phone is likely to say he/she is that person, whether this is true or not.  Also, interviewees tend not to give answers they regard as socially unacceptable.

In today’s world, however, the overwhelming problem with phone interviews is the inability to reach cellphone-armed twenty- and thirty-somethings, as well as households that have switched to cable or other phone providers.

For political polls, this may not be a burning issue, since younger people tend to vote less than older citizens.  There’s also some evidence that young landline users, in political polls anyway, may be an adequate substitute for their untethered peers.

But it would be one for our cereal trade group.

in person

This gathers the most information, but it’s expensive and time-consuming.  It’s harder to train and supervise face-to-face interviewers than telephone workers.  And computer dialing machines can let a phone interviewer race from number to number, while an in-person interviewer has a lot of transit time getting from one interview to the next.

summary

That’s the traditional survey world:  statistically valid conclusions drawn from data derived from surveying small samples of target frames that, most of the time, pretty accurately represent the target population.  The practice has been going on for over a century, and most of the kinks have been worked out.

Two problem areas:  declining response rates across all survey types, and, in the case of phone surveys, the worry that the target frame of landline users, the meat and potatoes of this kind of survey, may not accurately represent the underlying population, which includes a growing number of cellphone-only people as well.

That’s it for now.  Internet surveying tomorrow.