Exploring AI, Robotics in Public Administration Ethics

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Richard Jacobs

“The 2014 Survey: Impacts of AI and Robotics by 2025” recently reported findings solicited from 1,896 artificial intelligence (AI) and robotics experts. Many of these experts believe robotics will figure prominently in the nation’s social future as early as 2025 and view this trend positively, especially as it concerns the nation’s economy and job creation. Others expressed profound reservations about the impact that artificial intelligence, automation and robots may have upon the nation’s social future.

Siding with the optimists, the Office of Naval Research has awarded $7.5 million in grants for researchers at Yale, Georgetown, Brown, Tufts and Rensselaer Polytechnic Institute to build robots capable of making ethical decisions. In his DefenseOne.com article “Now The Military Is Going To Build Robots That Have Morals,” Patrick Tucker notes the grant money will be used to develop military drones that will possess an autonomous “sense of right and wrong and moral consequence.”

AI, robotics and public administration ethics
Researchers have already programmed robots to integrate diverse sources of input into their decision-making processes, including unconscious parallel processing, the speed of the cognitive cycle and multicyclic approaches to higher-order cognitive faculties. As Wendell Wallach, Stan Franklin and Colin Allen report in “A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents,” if successful, these robots will integrate a wide array of morally relevant inputs into their choices and conduct.

Of this capability, Tucker quotes Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of Moral Machines: Teaching Robots Right From Wrong, observing: “One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action.”

Now, this is some fascinating stuff! What if ASPA could program a robot to make ethical decisions for public administrators?

Doing so would seem a relatively straightforward task. ASPA programmers would pack the robot’s mind with its Code of Ethics, the roles and responsibilities borne by public administrators, as well as multiple confounding factors (for example, cultural assumptions and political ideologies). Programmers would then add the broad array of ethical dilemmas that public administrators typically confront, along with alternative scenarios and resolutions.

Programming these more concrete factors into the ASPA robot is one matter. But what about programming more abstract factors—like moral logic—into it? The good news, Wallach et al. report, is that researchers have already made some significant strides in this regard.

If ASPA’s robot were to think and conduct itself armed with all of this cognitive complexity, public administrators wouldn’t have to worry about making ethical decisions. In fact, they wouldn’t have to make ethical decisions at all. All they’d have to do is input the dilemma’s relevant facts, let the ASPA robot do what it has been programmed to do and then implement the robot’s answer.
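To make the idea concrete, here is a minimal, purely hypothetical sketch of that “enter the facts, get a programmed answer” loop: the programmer enumerates situations in advance and maps each to a pre-approved response. The names (EthicalDilemma, CODE_OF_ETHICS_RULES, recommend_resolution) and the sample rules are invented for illustration only; they do not represent ASPA’s Code of Ethics, the Navy-funded research or any actual system.

```python
# Purely illustrative sketch of a pre-programmed "ethics lookup" for an
# administrator's dilemma. All names and rules here are hypothetical.

from dataclasses import dataclass


@dataclass
class EthicalDilemma:
    facts: set  # the dilemma's relevant facts, as entered by the administrator


# Each rule pairs a set of triggering facts with a pre-approved resolution,
# mirroring the idea of programming situations and responses in advance.
CODE_OF_ETHICS_RULES = [
    ({"gift_offered", "vendor_under_review"},
     "Decline the gift and disclose the offer."),
    ({"public_records_request", "records_exist"},
     "Release the records within the statutory deadline."),
    ({"personal_financial_interest"},
     "Recuse yourself and refer the matter to a supervisor."),
]


def recommend_resolution(dilemma: EthicalDilemma) -> str:
    """Return the pre-programmed resolution that best matches the dilemma's facts."""
    best_resolution, best_overlap = None, 0
    for triggers, resolution in CODE_OF_ETHICS_RULES:
        if triggers <= dilemma.facts and len(triggers) > best_overlap:
            best_resolution, best_overlap = resolution, len(triggers)
    # Outside its enumerated situations the program simply has no answer --
    # the gap that functional morality and full moral agency are meant to fill.
    return best_resolution or "No programmed response; human judgment required."


if __name__ == "__main__":
    dilemma = EthicalDilemma(facts={"gift_offered", "vendor_under_review"})
    print(recommend_resolution(dilemma))
```

Notice that such a program only covers situations someone anticipated and coded in advance.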

There is one problem…

The ethical decision-making process involves three distinct intellectual operations: operational morality, functional morality and full moral agency. Tucker again quotes Wallach:

“Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses…. Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear….”

Researchers seem to be getting operational and functional morality under control. What about that third aspect, full moral agency?

Almost two decades ago, Daniel C. Dennett raised this question using HAL, the fictional computer of 2001: A Space Odyssey. In the article “When HAL Kills, Who’s to Blame?,” Dennett argues that if a machine is to be blamed, it must have a mens rea (or guilty state of mind), which includes motivational states of purpose, cognitive states of belief or a non-mental state of negligence. Moreover, to be morally culpable, a machine must possess beliefs. At the time, Dennett didn’t believe such machines existed. However, he mused, one day they just might.

Programming robots with cognitive complexity enables them to make complex decisions, but it does not necessarily capture the substantive aspects of the human ethical decision-making process. Programmed rules for resolving conflicts, for example, are far simpler than the rich dynamics present in human ethical decision-making. Wallach, Franklin and Allen ask: “Can morality really be understood without a full description of its social aspects…. [and what about] the kinds of delicate social negotiations that are involved in managing and regulating the conflicts that arise among agents with competing interests?” What, for example, about non-verbal emotional expressions?

Perhaps one day, machines will be programmed to mime human thinking, creativity, problem solving and innovating. As Pamela Rutledge stated in “Will Robots Take Our Jobs by 2025? Tech Experts Are Evenly Split,” “An app can dial Mom’s number and even send flowers, but an app can’t do that most human of all things: emotionally connect with her.”

Concurring, Tucker quotes the artificial intelligence expert Noel Sharkey, who maintains that programmed rules follow a human designer’s ideas of ethics but may not respond to an ethical dilemma in terms of how humans care. Full moral agency requires, for example, understanding and empathizing with others, as well as knowing what it means to suffer.

John P. Sullins disagrees in “When Is a Robot a Moral Agent?” Like robots, he argues, human beings may not possess full moral agency. After all, Sullins asserts, many people’s “beliefs, goals, and desires are not strictly autonomous, since they are the products of culture, environment, education, brain chemistry, etc.” In “Artificial Moral Agency in Technoethics,” Sullins also asks: “Apart from the question of what standards a moral agent ought to follow, what does it mean to be a moral agent?” Until philosophers clarify what a moral agent is and exactly which moral standards should prevail, researchers cannot program into an artificial autonomous agent, like a robot, what it takes to be one.

The problem confronting the experts now is that of full moral agency. What is it and what does it require?

Perhaps an ethical robot isn’t the most pressing matter

With AI and robotics experts struggling to answer those questions, perhaps ASPA would be wasting time, money and effort programming a robot to make ethical decisions for public administrators. Doing so might be nothing more than a “fool’s errand,” but for a different reason.

Consider the data J. B. Wogan reports in “Want to Govern? Survey Says, Attend Policy School.” According to a recent Government Exchange Research Community survey of senior state and local public officials:

  • 88 percent of respondents attended graduate school in a government-related field and indicate that their coursework prepared them for their current careers in government.
  • Only 48 percent took ethics courses.
  • Of those who did, only 16 percent rated those courses among their top three most helpful courses. 

Perhaps the Office of Naval Research grant money would be put to better use by programming robots to teach aspiring public administrators more helpful ethics courses, like ones that would develop ethical competence.

 

Author: Richard M. Jacobs is a professor of Public Administration at Villanova University, where he teaches organization theory and leadership ethics in the MPA program. Jacobs can be emailed at [email protected].
