Ethics and Artificial Intelligence

August 17, 2018

In the wake of a technological revolution, society is learning to work with and alongside rapidly evolving technology. We rely heavily on machines to augment nearly every element of day-to-day life. Since the turn of the 21st century, computing machines have become essential to many people's lives, and in many cases they have taken over tasks formerly performed by humans. We are seeing a dramatic increase in the capabilities of machines, and a corresponding increase in our reliance on them. Central to this shift is the question of artificial intelligence, or AI. As computers are taught to learn, understand concepts, and make decisions, a need to regulate them follows. As machines evolve to replace jobs formerly held by humans, we must ask: What is the responsibility of policymakers and lawmakers? Should they proactively monitor and regulate machine bias before AI technology is fully implemented, in order to avoid an otherwise inevitable ethical dilemma?

As AI technology continues to develop, computer scientists have already begun to explore its potential ethical implications. In her New York Times article, "Tech's Ethical 'Dark Side': Harvard, Stanford and Others Want to Address It," Natasha Singer discusses how the study of computer science is evolving to incorporate an element of ethics. Leading universities across the country have introduced ethics courses, similar to those required of medical students, into their computer science curricula. The idea is that the future of computer technology and AI will require an ethical component. Because the technology of the future will have the potential to make serious decisions with major human impact, this is a reality that must not be ignored. Professors from renowned universities like Harvard, Stanford, and MIT are moving to the forefront to anticipate the implications of machine bias. Singer raises questions about the rationale of AI technology and how a computer would decide to value things such as human life. As computer technology develops, its ethical factors, too, must be examined.

With educators and scientists taking a proactive approach to the social implications of technological advancement, the way our society works alongside AI is no longer theoretical. While robots are not yet driving our family vehicles, the technology to do so does, in fact, exist. Policymakers, lawmakers, and even diplomats must now step forward and address these issues and how they relate to society, politics, economics, and the global community. As James Rachels explains, "As moral agents, we should be concerned with everyone whose welfare might be affected by what we do" (Rachels, 200). Given the enormous impact of technology on the world, managing its ethical implications is an unavoidable responsibility.

"Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies," an article published in the Harvard Journal of Law and Technology, discusses how the development of AI technology will affect research and development, control and privacy, and autonomy. AI technology is slowly learning to learn, as well as to make decisions, often acting autonomously. As this technology expands, there are questions that must be asked.

For example, there already exist autonomous vehicles that function as a human driver would, with little to no intervention needed from a passenger. In a hypothetical future where these autonomous cars are common, or even universal, and a car is faced with a choice between veering off track to spare a crowd of people or preserving the life of its single passenger, how will that decision be made? Is it a question of utilitarianism? Similarly, how should the car respond when given the choice between destroying a materially high-value item in its path or potentially injuring the passenger within? What if the car were to move off track to avoid a crash and, in doing so, demolish a small shelter with someone inside who was not on the car's radar? Finally, absent regulation, how would money or power shape the way the car decides what happens? As James Rachels discusses, the problem of "equal concern" presents itself (Rachels, 109). Many of these decisions can scarcely be made by a human from an ethical standpoint; how is a machine equipped to do so? Each of these decisions must be made for the technology, and there should most certainly be regulating guidelines to help inform these resolutions.
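To make the dilemma concrete, consider a minimal sketch, in Python, of what a purely utilitarian decision rule might look like if it were hard-coded into a vehicle. Everything in it, the Outcome record, the COST_PER_LIFE weight, the choose function, is a hypothetical illustration invented for this discussion, not the logic of any real system; the point is that a human must pick the numbers.

```python
# Hypothetical sketch only: no real autonomous vehicle is known to work this way.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    lives_at_risk: int       # people who could be injured or killed
    property_damage: float   # estimated damage in dollars

# Assumed weight: how many dollars of property damage "equals" one life at
# risk? Any finite number here is an ethical commitment, not an engineering fact.
COST_PER_LIFE = 10_000_000.0

def expected_harm(outcome: Outcome) -> float:
    """Collapse an outcome into a single number the machine can compare."""
    return outcome.lives_at_risk * COST_PER_LIFE + outcome.property_damage

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pure utilitarianism: pick whichever outcome minimizes expected harm."""
    return min(outcomes, key=expected_harm)

if __name__ == "__main__":
    swerve = Outcome("swerve off the road, endangering the passenger", 1, 50_000.0)
    stay = Outcome("stay on course toward the crowd", 5, 0.0)
    print(choose([swerve, stay]).description)  # sacrifices the passenger
    # Note what the sketch cannot see: the person inside the shelter who is
    # "not on the car's radar" never appears in the outcomes list at all.
```

Even this toy version exposes the regulatory gap: nothing constrains what value COST_PER_LIFE takes, whose interests get counted as an Outcome at all, or what happens to the people the sensors never register.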

The question of regulation is not only a domestic issue. Different countries are developing technology at varying levels and varying speeds. With the United States being a world leader, it only makes sense that its lawmakers should be at the forefront of legislation surrounding AI technology. Negotiating policies and international laws among countries, as they apply to AI, will not happen overnight. Cultural relativism will play a role in how different governments may want to mold their legislation, and, as James Rachels discusses, to call one country's ideas surrounding the issue "correct" or "incorrect" could disrupt existing harmony (Rachels, 16). However, as Rachels also suggests, cultural differences are often exaggerated or misinterpreted (Rachels, 21). If policymakers wait until AI presents a problem, these small cultural differences could become difficult to overcome, whereas if policies surrounding AI are openly discussed and laws or rules are agreed upon, the risk of escalation is minimized.

In addition to the scientists working to understand the implications of AI in the educational arena, many high-profile scientists, innovators, and investors have expressed a desire for government regulation. In 2014, Elon Musk voiced his concern regarding this issue:

I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is …. I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. (Scherer, 355)

Other notable figures in the field, including Bill Gates and Steve Wozniak, have also expressed similar concerns (Scherer, 355). One speculation is that a special agency may need to be created to educate, to provide ethical safeguards, and to guard against the alienation of weaker parties.

With encouragement from innovators in the field of AI, actual proactive intervention may be on the horizon. John McGinnis, a law professor at Northwestern University, has proposed a government entity devoted to AI safety: it would subsidize AI safety research and penalize AI scientists who disregard research-based cautionary measures (Scherer, 398-399). While McGinnis's proposal is a step in the right direction, it has not yet been refined, much less implemented, and many still oppose any government intervention at all.

Unfortunately, as stated by Professor Sahami, a former Google research scientist, "Technology is not neutral… Choices that get made in building technology have social ramifications" (Singer). Policymakers have an ethical responsibility to approach this matter proactively. By taking the initiative and addressing the potential ethical issues that AI presents now, lawmakers and policymakers can shape the way this technology is developed and minimize the potential harm or chaos that could result. Researchers speculate that it will be decades before AI achieves levels comparable to human intelligence (Scherer, 384), but jumpstarting the process will spare society the possibility of AI-driven pandemonium in the years to come.


References

Rachels, James. The Elements of Moral Philosophy. 7th ed., McGraw Hill, 2011.

Scherer, Matthew U. "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies." Harvard Journal of Law and Technology, vol. 29, no. 2, 2016, pp. 354–400, doi:10.2139/ssrn.2609777.

Singer, Natasha. “Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It.” The New York Times, 12 Feb. 2018, nyti.ms/2BSOEJu.
