Regulatory Options for AI

The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.

By Craig P. Orgeron
May 19, 2023

Artificial Intelligence (AI) has become an increasingly important topic of discussion in recent months. With the exponential growth of AI technology, there are many concerns regarding the ethical, social and legal implications of its deployment in various sectors. As such, the question of how AI should be regulated is becoming increasingly urgent. While some argue that AI should be left unregulated to promote innovation, others contend that it must be strictly regulated to protect society from potential negative consequences.

Proponents of unregulated AI argue that regulation could stifle innovation and limit the development of AI technology. They also contend that AI’s potential to enhance human progress and well-being is enormous, and that any attempt to regulate AI could result in missed opportunities for innovation and growth. Opponents of unregulated AI counter that, without regulation, AI could pose a significant risk to society. AI technology is already being used to make decisions that affect people’s lives, such as in medical diagnosis, financial transactions and hiring practices. Without proper regulation, AI could produce unintended consequences that harm individuals and society as a whole. For example, biased AI algorithms could lead to discriminatory practices that perpetuate social inequalities. Moreover, AI could be used for malicious purposes, such as cyber-attacks, terrorist activities and other criminal acts.

Opponents of unregulated AI therefore argue that regulation is essential to protect society from the potential harms of AI technology, and that it should focus on ensuring AI is developed and deployed in a manner that is ethical, transparent and safe. Another argument in favor of regulating AI is that it is already subject to regulation in sectors such as medicine, finance and transportation, where rules have been put in place to protect public safety and ensure ethical practices. As AI technology becomes more pervasive, it is increasingly important to ensure that it is subject to similar safeguards. However, there are also concerns about the potential unintended consequences of AI regulation. Some argue that regulation could lead to a “regulatory capture” situation, where regulatory agencies are influenced by the very industries they are meant to regulate, resulting in ineffective or inappropriate rules. Any regulations for AI must therefore be carefully crafted to avoid these pitfalls.

One potential solution is to create a global regulatory framework for AI, similar to the international agreements that exist in other areas, such as the Montreal Protocol for the protection of the ozone layer or the Paris Agreement for climate change. A global regulatory framework would help ensure that AI is developed and deployed in a manner consistent with ethical, social and legal norms across different countries and regions. It would also help avoid a situation in which countries adopt divergent AI regulations, which could lead to regulatory arbitrage and potential risks to society.

Another possible solution is to establish independent regulatory agencies for AI. These agencies would oversee the development and deployment of AI technology and could audit AI systems to ensure that they are transparent, reliable and safe to use. They could be modeled after existing regulatory bodies, such as the Federal Aviation Administration (FAA) or the Food and Drug Administration (FDA), which oversee the aviation and pharmaceutical industries, respectively. However, there are also concerns about the inefficiency and bureaucracy that could arise from creating new regulatory agencies. Moreover, their effectiveness would depend on their independence from industry influence, which could be difficult to achieve in practice.

Another viable solution is to promote ethical AI development through voluntary industry standards and best practices. This approach would rely on industry self-regulation, rather than government regulation. The development of voluntary industry standards and best practices could be led by professional organizations, academic institutions or industry consortia. These standards and best practices could cover a wide range of issues, such as algorithmic bias, data privacy and explainability of AI systems. One advantage of this approach is that it would be more flexible and adaptable than government regulation, as industry standards could be updated more quickly to keep pace with the rapid changes in AI technology. However, this approach would rely on the willingness of industry stakeholders to voluntarily adopt and adhere to ethical standards and best practices, which may not always be the case.

In conclusion, the question of how AI should be regulated is a complex and multifaceted issue that requires careful consideration of the potential benefits and risks of AI technology. The most effective solution is likely to be a combination of approaches: global regulatory frameworks, independent regulatory agencies, and voluntary industry standards and best practices. Ultimately, the goal of AI regulation should be to ensure that AI technology is developed and deployed in a manner that is consistent with ethical, social and legal norms, and that it benefits all members of society.


Author: Dr. Orgeron has extensive technology experience in the private sector and at the federal and state levels of the public sector. He is currently Professor of MIS at Millsaps College. Dr. Orgeron has served as an Executive Advisor at Amazon Web Services (AWS), as CIO for the State of Mississippi, and as President of the National Association of State Chief Information Officers (NASCIO).
