The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Zachary Curinga
June 28, 2024
“Research is formalized curiosity. It is poking and prying with a purpose.” –Zora Neale Hurston
All data tells a story. Numerical data is unique, however, in that it is often presented stripped of its original context, because researchers want to remove as much bias from the data as possible. Yet when data is presented this way, there is arguably a contextual loss: the human elements are absent. As data is depersonalized, how it is seen and used changes, and certain types of stories can no longer be told. Furthermore, knowledge about programs and how they are administered can exist outside the managing organizations, which has implications for how data is gathered. Finally, how data is recorded affects transparency.
How Data is Recorded and Aggregated
Currently, the United States’ statistical data structure is highly decentralized: there is administrative data collected by agencies, programmatic data, survey data and data collected by private and public partners. The United States has about thirteen principal agencies that manage statistical data. Managing data collection at this scale, of course, requires flexible cooperation in gathering data and ensuring its overall quality before it is released.
Under the Foundations for Evidence-Based Policymaking Act of 2018, each agency now has a Chief Data Officer (CDO), a Statistical Official and an Evaluation Officer. Together, they are tasked with issuing guidance on data acquisition and evaluation and with coordinating activities meant to strategically improve the collection and leveraging of quantitative data. Importantly, these positions are not politically appointed; all of this is part of an effort to increase the effectiveness of evidence-based policymaking.
Unfortunately, there is no comparable effort to increase or utilize qualitative data in the same way. As is often noted when comparing quantitative and qualitative data, this exclusion of qualitative data leads to a significant loss of depth when trying to understand program management and policy efficacy.
This is not to say that others are not trying to aggregate and use qualitative data. One notable example is Syracuse University’s Qualitative Data Repository (QDR). The QDR is working to develop a universal standard for the “managing, archiving, sharing, reusing and citing [of] qualitative data.” Small departments, however, may find the cost hard to justify if finances are strained and only a few faculty use the repository. Additionally, the mass aggregation of qualitative data may require ethical reconsideration of the IRB approval process: data is often anonymized to ensure that identifying information is not available, but, depending on the type of review, destruction of the data after the project can be a condition of approval.
Transparency
The aggregation of qualitative data presents unique opportunities for transparency and accountability. Qualitative data takes many forms: a press report, an email correspondence, an interview, a recording, a photo, a video. Excluding official correspondence, which can be requested under the Freedom of Information Act (FOIA), these other forms are available to a certain extent, but they are not aggregated per se. After all, the availability of this data is not the same for every branch of government, and some may not consider qualitative data important.
There are also artificial barriers to obtaining data that may be helpful. Consider the Supreme Court, a notable example of limiting access to qualitative data (which explains why there is no C-SPAN video of its proceedings). While one can listen to oral arguments live and through audio recordings (e.g., oyez.org), no cameras are allowed in the courtroom. For posterity and for qualitative research, this type of data is incredibly important, and it is purposefully withheld. Interpreting qualitative data relies on a respondent’s tone and body language; without a visual record, the public is missing certain subtleties: the tension in the room, the Justices’ expressions as they pose hypotheticals and the visible reactions of the gallery and the counselors during arguments.
Conclusion
Although an infrastructure for data management is in place, an apparatus for recording and gathering this data must still be created. This will require bureaucrats to assume the role of storyteller; perhaps there is already a model for this new role in the Federal Writers’ Project (FWP). During the Great Depression, the FWP employed workers and tasked them with documenting the lives of everyday Americans through photography, audio recordings and interviews. Much of this work would gain critical acclaim and launch the careers of prominent researchers, such as Zora Neale Hurston.
The decentralized nature of data gathering could be ideal for such a project. Every county in the country has a cooperative extension office; these offices could house researchers who interview program recipients and bureaucrats alike to understand policy and management implications from a multitude of perspectives. Ultimately, this would bring transparency and humanity to data in a way that benefits researchers and emboldens those who may not ordinarily be able to make their voices heard.
Author: Zachary Curinga is currently a PhD student at the Rutgers University–Newark School of Public Affairs and Administration (SPAA). His research interests include nonprofit management, organizational change, public health nutrition and disability equity. He can be contacted by email at [email protected]