Technology and Social Justice

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Leslie T. Grover
February 12, 2020

As public administration professionals, we are often tasked with remaining neutral and using data-driven methods to make decisions, design programs and inform policy recommendations. Increasingly, we are inundated with new apps, data software and best practices for being technologically driven and social justice-oriented. The Ford Foundation has even created the field of "public interest tech," the goal of which is to use technology for the public good. The field seeks to fuse technology and social justice to improve access, protect online freedoms, improve security and reform the criminal justice system.

While these goals are admirable, they gloss over a basic social justice truth: before any of them can be accomplished, there must be an actionable plan to address and dismantle the disparity inherent in the ways we use technology in the name of objectivity and data-driven work. In other words, the good work and input of professionals, community members, coders and public servants must be enacted within a framework that can feasibly be implemented, one that recognizes that social justice work can sometimes create negative externalities that outweigh its intended good consequences. This is the nature of social justice, particularly when it comes to technology.

Consider online harassment. In current discussions of public interest tech, the focus is on safe spaces, social media platforms' handling of hateful language, surveillance, freedom of speech and expression, privacy and, in extreme cases, involving local authorities. This approach, however, does little to proactively address online harassment, and the social consequences for harassers are negligible. Online harassers can simply create new accounts if their current accounts are suspended, and the penalties for online harassment do not serve as deterrents in any sense of the word. While online users can sometimes be unmasked, a bevy of products allows users to hide their locations and identities.

The current tech-cum-social justice approach assumes a generally safe space for all users, with only peripheral instances of attacks, and the prevailing social justice impulse is to educate, maintain safety and expand the voices of those without access. The data, however, indicate a different approach is needed. Hate speech, cyberbullying and online attacks are more common than those framing the interplay of technology and social justice seem to realize. The Pew Research Center reports that 40% of adults have experienced online harassment, and 3 of every 4 adults have witnessed someone being bullied, threatened or mistreated online. And these are only the reported cases; as with many other forms of abuse, there are surely unreported instances of harassment, particularly among marginalized groups.

Without intending to, this approach to technology and social justice has already created a framework of privilege, with a disconnect among those who are harassed, those who witness the harassment and those who report it. By assuming a safe environment punctuated by isolated instances of harassment, it fails to frame the online environment as varied terrain where safety is always the ideal but can never be assumed to exist for all users.

As researchers, practitioners and other public administration professionals, we are stakeholders and participants in the public interest tech field. The ways we use, interpret, code and apply data have significant implications for social justice. To this end, there are at least three major ways for us to support social justice through technology.

The first is to realize that the online environment mirrors our offline society, including its biases, fears and inequalities. At the end of the day, all of our technology is created by people, and it reflects that society, including the parts that can marginalize and hurt others.

The second is to recognize the role coding plays in setting the stage for apps, data stewardship and other technological tools. Code is the starting point for many of the government databases, apps and other tools we use to interact with the public. In Race After Technology, Ruha Benjamin makes an important critical point: coding is an opportunity for us to address some of our biases in the name of social justice.

The third is to understand that creating a more equal online environment may require elevating and protecting some voices and experiences in order to build a safer and more just technological playing field. As in offline life, many voices and experiences must be raised to put them on par with those that have always been at the forefront of society. To be more effective in social justice work, we must recognize that the online environment is not neutral, equal or objective, and we must ask who is not at the table, who is not represented and which new subsets of groups may need to be included in the future.


Author: Leslie T. Grover, Ph.D., is the founder of Assisi House, Inc., a non-profit research firm, and Associate Professor of Public Policy and Public Administration at Southern University in Baton Rouge, LA. Her Twitter handle is ltg_is_me.
