Old School or New Tech: What’s the Better Way to Survey?

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Thomas Miller
July 2, 2018

Old school surveys invite a random sample; new tech surveys allow anyone to opt in on the Web

There are surveys and there are scientific surveys. These days, scientific surveys—ones with unbiased questions, asked of a randomly selected sample of residents, with proper weighting of results to make the sample’s demographic profile and aggregate answers similar to the community’s—compete with cheaper surveys offered to anyone on the Internet who has a link to survey questions. The inexpensive surveys are called “Opt-In” because respondents are not selected; they choose to come to the survey with no special invitation.
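The demographic weighting described above can be sketched in a few lines. This is only an illustration with made-up numbers, not NRC's actual procedure: each respondent in a demographic cell receives a weight equal to the community's share of that cell divided by the sample's share, so the weighted sample mirrors the community's profile.

```python
# Post-stratification weighting, minimal sketch (hypothetical shares and ratings).
# Weight = population share / sample share for each demographic cell.

# Hypothetical community (census) shares and survey-sample shares by age group.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Hypothetical mean "excellent/good" percentages per cell (0-100 scale).
cell_rating = {"18-34": 70.0, "35-54": 80.0, "55+": 90.0}

# Unweighted mean reflects the sample's skew toward older residents;
# weighted mean reflects the community's actual age mix.
unweighted = sum(sample_share[c] * cell_rating[c] for c in cell_rating)
weighted   = sum(sample_share[c] * weights[c] * cell_rating[c] for c in cell_rating)

print(round(unweighted, 1))  # 83.5 -- inflated by over-represented older residents
print(round(weighted, 1))    # 80.5 -- matches the community's demographic profile
```

Note how the two estimates differ by three points even though the individual answers are identical; the weighting only changes how much each respondent counts.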

As timelines crunch and budgets shrivel, the cheap, fast Web surveys become hard to resist, especially if the results they deliver are pretty much the same as those that come from more expensive scientific surveys. The problem, for now, is that the results are not the same.

Survey researchers are working to understand the differences

For about three years, my company, National Research Center, Inc. (NRC), has been offering, alongside its scientific surveys, opt-in surveys with the same content, simply posted on a local government's website after the controlled survey is done. Not only does the opt-in survey give every resident an opportunity to answer the same questions asked of the randomly selected sample, it also gives NRC an opportunity to explore the differences in ratings and raters between the two respondent groups.

Over that period, NRC’s research lab has explored how scientific surveys (mostly conducted using U.S. mail) differ from Web opt-in surveys in responses and respondents across close to 100 administrations of The National Citizen Surveys™ (NCS), a survey of resident opinion about community life and local government services. NRC is working to identify the kinds of questions and the best weights to modify opt-in results so that they become adequate proxies for the more expensive probability/scientific surveys. We are not alone. The American Association for Public Opinion Research (AAPOR) studies this as well, and if you are in survey research but not studying what are called “non-probability” samples (the broad category encompassing opt-in surveys), you already are behind the curve.

Respondents to scientific or opt-in surveys are different

On average, those who opted to take the self-selected version of The NCS on the Web had different demographic profiles than those who were randomly selected and chose to participate. The opt-in respondents more often had higher incomes than those who responded to the scientific survey. They also more often were single-family homeowners, paid more for housing than the randomly selected residents, were under 45 years old, had children and primarily used a mobile phone.

But as noticeable as those differences were across scores of comparative pairs of surveys, the biggest “physical” differences between the two groups came in the activities they engaged in. The opt-in cohort was far more active in the community than was the group responding to the scientific surveys. For example, those who responded to the opt-in survey were much more likely to contact the local government for help or information, attend or view a government meeting or event, volunteer, advocate for a cause, participate in a club or even visit a park.

Responses also differ between the opt-in and the scientific survey takers

Even if the people who respond to opt-in surveys and those who participate in scientific surveys come from different backgrounds or spend their time differently, as was clear from the comparisons we made, their opinions may be about the same. Curiously, if we look only at the average difference between ratings given to community characteristics or services, the opt-in and scientific responses look a lot alike. The average difference in ratings across 150-plus questions from close to 100 pairs of surveys amounted to only about one point (on a 100-point scale), with the opt-in respondents giving the very slightly lower average rating.

But behind the average similarity lurk important differences. In a number of jurisdictions, large differences between ratings coming from opt-in respondents and scientific respondents occurred even when the average differences across jurisdictions were small.

For example, take the rating for “neighborhood as a place to live.” The average across 94 jurisdictions was 84 percent rating it excellent or good, for BOTH kinds of surveys. But not every pair of surveys from the many jurisdictions yielded the same rating; only the average across the jurisdictions matched.

When we examined each of the 94 jurisdictions’ paired ratings of “neighborhood as a place to live,” 20 of the pairs differed from each other by six or more points. In these 20 jurisdictions, ratings of neighborhoods were sometimes much higher from the opt-in respondents and sometimes much higher from the scientific respondents.
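The paired comparison described above can be sketched as follows. The numbers here are hypothetical stand-ins for the 94 jurisdiction pairs, chosen only to show the mechanics: a mean difference near zero can coexist with large disagreements in individual jurisdictions, in both directions.

```python
# Paired jurisdiction ratings (hypothetical): percent rating "neighborhood
# as a place to live" excellent/good from the opt-in vs. scientific survey.
pairs = [
    (88, 80), (75, 84), (84, 84), (90, 83), (70, 78), (85, 86),
]

# Signed difference per jurisdiction: opt-in minus scientific.
diffs = [opt_in - scientific for opt_in, scientific in pairs]
mean_diff = sum(diffs) / len(diffs)

# Jurisdictions where the two methods disagree by six or more points,
# in either direction.
large_gaps = [d for d in diffs if abs(d) >= 6]

print(round(mean_diff, 2))  # -0.5 -- small on average, like the article's one point
print(len(large_gaps))      # 4 -- yet most of these toy pairs disagree substantially
```

The key point the sketch illustrates is that averaging signed differences lets large positive and negative gaps cancel, which is why the per-jurisdiction comparison matters.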

Imagine that a client decided to relinquish its trend line of scientific surveys and conduct its next survey using only the opt-in version, and that this survey found a steep decline in the rating for neighborhoods. Given our research on differences in response between opt-in and scientific surveying, we would be hard pressed to assure the client that the rating change reflected a real decline in perspectives about neighborhoods, when the decline could have come from the change in survey method alone.

The next step is to test different weighting schemes

If we can determine the right weights to apply to opt-in responses, we are hopeful that the differences we have seen in our lab will diminish. That way, we will be able to encourage clients to move to the faster, cheaper opt-in method without undermining the trend of scientific data they have built. Until then, the scientific survey appears to be the best method for assuring that a sample of respondents is a good representation of all community adults.
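One family of weighting schemes worth testing is "raking" (iterative proportional fitting), which adjusts respondent weights until the weighted sample matches known population margins on several variables at once. The sketch below uses a toy sample and hypothetical targets; it is not NRC's production method, only an illustration of the technique.

```python
# Raking (iterative proportional fitting) on two margins: age and housing tenure.
# All respondents, shares and targets are hypothetical.
respondents = [  # (age, tenure) for a toy opt-in sample
    ("under45", "own"), ("under45", "own"), ("under45", "rent"),
    ("45plus", "own"), ("45plus", "rent"), ("45plus", "rent"),
]
targets = {  # known population margins, e.g. from the census
    "age":    {"under45": 0.40, "45plus": 0.60},
    "tenure": {"own": 0.55, "rent": 0.45},
}
dims = ["age", "tenure"]  # dimension i corresponds to respondents[j][i]

weights = [1.0] * len(respondents)

def margin(dim_index, category):
    """Weighted share of respondents in `category` on the given dimension."""
    total = sum(weights)
    return sum(w for r, w in zip(respondents, weights)
               if r[dim_index] == category) / total

# Alternately rescale weights to hit each dimension's targets; fixing one
# margin disturbs the other slightly, so iterate until both converge.
for _ in range(50):
    for dim_index, dim in enumerate(dims):
        current = {c: margin(dim_index, c) for c in targets[dim]}
        weights = [w * targets[dim][r[dim_index]] / current[r[dim_index]]
                   for r, w in zip(respondents, weights)]

print(round(margin(0, "under45"), 3))  # converges to the 0.40 target
print(round(margin(1, "own"), 3))      # converges to the 0.55 target
```

Whether weights like these can also close the attitudinal gaps between opt-in and scientific respondents, rather than just the demographic ones, is exactly the open question the lab work described above is trying to answer.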


Author: Thomas Miller is president of National Research Center, Inc. (www.n-r-c.com), a professional survey research and evaluation firm serving the needs of local government, schools and health care organizations.

