The Complexity of Social Problems and Big Data. Part II.
The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
By Mauricio Covarrubias
January 13, 2023
Although there have been important advances, more research is required regarding the possibilities of “big data” in the public sector. Basanta Thapa notes that “the existing literature usually offers prognoses rather than findings.” In this sense, “big data” is frequently presented (i) as an improved tool for monitoring and evaluation; (ii) as an analytical tool for more efficient resource allocation; and (iii) as a way of providing more up-to-date data for decision making.
In general terms, however, because data science projects are tied to the formulation of public policy, and therefore to improving the efficiency and effectiveness of public intervention, their application tends to focus on the decision-making phase.
Big data and defining policy problems
One aspect of the effects of “big data” that we want to highlight in this second part of the column is its contribution to the definition of complex problems, a line of research we consider still incipient. The point is that “big data” analysis can be present from the stage at which the social problem that public policy seeks to resolve is identified and defined, helping to detail its possible causes and consequences.
This approach differs from the prevailing idea that data science projects are almost exclusively associated with the decision-making stage of the public policy life cycle. On this point, it is worth consulting Responsible Use of AI for Public Policies: Data Science Manual, published by the Inter-American Development Bank, which finds that AI does not replace public policy: by itself it does not solve the social problem; its function is to assist by providing information for decision-making.
From the perspective of Viktor Mayer-Schönberger and Kenneth Cukier, “big data” represents a revolution that will transform the way we live, work and think. Since government is a knowledge-based business, the “revolutionary” effect of “big data” in the public sector could be significant. With respect to “wicked problems” in particular, Greg Satell, author of Yes, Big Data Can Solve Real World Problems, argues that the promise of “big data” analytics is to cut through complexity and unravel their wickedness. Moreover, “big data” promises to reduce uncertainty by providing foresight through predictive analytics.
Applying descriptive, diagnostic and predictive analytical techniques to large volumes of data can reveal hidden patterns, correlations and trends that could not be obtained otherwise. By gaining knowledge of phenomena that previously remained hidden because of the difficulty of measuring and analyzing them by other means, the main problems of public policy could be better understood and addressed.
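To make the idea more concrete, the following Python sketch shows, on entirely synthetic data with hypothetical indicators, what a descriptive step (surfacing correlations) and a predictive step (fitting a simple model) might look like. It illustrates the general logic only, not any particular government system.

```python
# A minimal, illustrative sketch (not a production pipeline): synthetic
# data standing in for the large-scale sources the column describes.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical city-level indicators; in practice these would come from
# administrative records, sensors or other large-scale sources.
df = pd.DataFrame({
    "unemployment_rate": rng.uniform(3, 15, 500),
    "school_dropout_rate": rng.uniform(1, 20, 500),
})
# Assume, purely for illustration, that the outcome partly depends on both.
df["crime_rate"] = (
    0.6 * df["unemployment_rate"]
    + 0.3 * df["school_dropout_rate"]
    + rng.normal(0, 1.5, 500)
)

# Descriptive/diagnostic step: surface correlations that suggest patterns.
print(df.corr().round(2))

# Predictive step: fit a simple model to anticipate the outcome elsewhere.
model = LinearRegression().fit(
    df[["unemployment_rate", "school_dropout_rate"]], df["crime_rate"]
)
new_city = pd.DataFrame(
    {"unemployment_rate": [10.0], "school_dropout_rate": [12.0]}
)
print(model.predict(new_city))  # forecast for a hypothetical city
```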
The epistemological definition of problems
Arlena Jung, Rebecca-Lea Korinek and Holger Straßheim argue that, as a novel means of producing meaningful knowledge for policy decisions, “big data” can change the way problems are defined, since part of the problem definition will move from the realm of politics to the realm of academia. Thus, beyond the fact that the definition and complexity of problems have much to do with processes of “social construction,” progress could be made toward a more objective definition of them through better knowledge of their “material causes.” In other words, the epistemological definition of problems could be deepened.
One possible effect of the above, according to Basanta Thapa, would be a change in the possession of “epistemic authority”: whoever is able to establish their definition of the policy problem as the dominant one gains “problem ownership” and, consequently, can influence the course of action to address the problem. For his part, in Politics and Policy Expertise: Towards a Political Epistemology, Holger Straßheim points out that as new ways and places of knowledge production emerge, e.g., big data analytics, changes in existing knowledge regimes are inevitable, entailing shifts in power and access.
Big data and causal inference
Finally, it is important to note that although large datasets provide the opportunity to learn about quantities that were unfeasible to study just a few years ago, “big data” alone is insufficient to solve society’s most pressing problems. Justin Grimmer of Stanford University argues that the opportunity for descriptive inference associated with “big data” creates the chance for political scientists to ask causal questions and build new theories that would previously have been impossible: “Therefore, although descriptive inference often is denigrated in political science, our field’s expertise in measurement can make better and more useful causal inferences from big data”. Based on this, one of the main challenges is to develop robust experiments and research designs that make large datasets even more useful.
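As a minimal illustration of Grimmer’s point that research design matters even with very large samples, the following Python sketch, again with entirely synthetic data, contrasts a naive observational comparison, which is biased by self-selection, with a randomized design that recovers the true effect.

```python
# A minimal sketch of the point above: a large dataset where a naive
# correlation misleads, while a randomized design recovers the effect.
# All numbers are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Observational data: motivated people both enroll in a program and do
# better anyway, so the raw comparison overstates the program's effect.
motivation = rng.normal(0, 1, n)
enrolled = (motivation + rng.normal(0, 1, n)) > 0
outcome = 2.0 * enrolled + 3.0 * motivation + rng.normal(0, 1, n)
naive = outcome[enrolled].mean() - outcome[~enrolled].mean()

# Experimental design: random assignment breaks the link with motivation,
# so a simple difference in means estimates the true effect (2.0).
assigned = rng.random(n) < 0.5
outcome_rct = 2.0 * assigned + 3.0 * motivation + rng.normal(0, 1, n)
experimental = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"naive observational estimate: {naive:.2f}")        # biased upward
print(f"randomized design estimate:   {experimental:.2f}")  # close to 2.0
```

The lesson of the sketch is the one the column draws: more data sharpens description, but only a credible research design turns that description into a causal answer.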
Author: Mauricio Covarrubias is Professor at the National Institute of Public Administration in Mexico. He is co-founder of the International Academy of Political-Administrative Sciences (IAPAS) and founder and editor of the International Journal of Studies on Educational Systems (RIESED). He coordinates in Mexico the TOGIVE Project (Transatlantic Open Government Virtual Education) of the European Union’s ERASMUS+ Program and is a member of the National System of Researchers of CONACYT. He received his Ph.D. from the National Autonomous University of Mexico. He can be reached at [email protected] and followed on Twitter @OMCovarrubias