The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Tom Miller
December 2, 2014
Resident opinion represents the bottom line in government quality. Increasingly, local governments are engaging their residents with periodic surveys about community and service quality and then monitoring those opinions in a performance management system. How to get trustworthy opinions is up for grabs because learning what the people think isn’t as easy as it once was.
The survey research industry is in knots. New technologies, like smartphones and even the Internet, pose both challenges and opportunities for collecting Americans’ opinions. The old-fashioned way – telephone – now reliably gets poor response rates, though it still delivers only mildly skewed results. Many local governments have established performance management systems that rely on ‘scientific surveys’ (shorthand for probability surveys that hew to established rules of survey quality) to track resident opinion about the quality of community services and quality of life, as well as public trust. Jackson, Mississippi is one example. These systems rely on opinion data that can be generalized from a sample of respondents to all community adults.
As the public opinion survey industry wrestles with new technologies for data collection, local governments must appreciate the strengths and weaknesses of, for example, simply posting a few questions on their website that receive a little or a lot of public input. The touchstone of a quality resident survey includes not only the right methods but the right outcome. The right outcome gives a picture of sentiment that takes the audience far beyond the survey results – where some number of residents answers the questionnaire – to the entire community. In that respect, an accurate survey of, say, 400 residents is the telescope through which we can view the opinions of 40,000 residents.
When communities post questions on their website and get some interested residents to respond, the best they typically can claim is that “these are the opinions of the x number of people who responded.” That’s not much, though it is a little more than what you might get at a live town hall. Amid all the hand wringing among public opinion research pros, some solutions are starting to emerge for transforming the responses of, say, 200 opt-in Web responders into a meaningful representation of the entire community.
A couple of private sector firms – one is our own – have been developing options for local government clients to learn more about community sentiment from opt-in postings on the Web. These new tech options harness the Web and, by applying the appropriate corrections to Web responses from self-selected residents, are helping to bring the survey telescope into focus so that a clear-enough picture of the full city or county perspective can be seen. Once the methods are reliably accurate, these opt-in survey results can become metrics worthy of monitoring as cities and counties continue to strive for enhanced livability. Until then, the scientific survey remains the best way to understand more than the sentiment of even a large handful of residents.