“Now I am become Death, the destroyer of worlds”
– Robert Oppenheimer quoting from the Bhagavad Gita upon witnessing the first nuclear explosion
The ethics of artificial intelligence is something of an umbrella term, encompassing morality-related concerns in the design and use of AI systems as well as concerns about the behavior of the algorithms themselves. Because the field of AI has gained immense popularity over the past few years, it has attracted the attention of the general public and of thinkers from different but related fields, such as physics, philosophy, cognitive science, and biology. Needless to say, since these concerns come from voices with various levels of training in computer science and understanding of how the algorithms work, they are not all equally justified. We shall endeavor to frame the most high-profile ethical debates regarding AI systems at the time of writing.
Morality and Ethics
Morality is, alongside metaphysics, perhaps one of the oldest branches of philosophy. In the Western tradition it indeed constituted the basis for what we now refer to as the beginning of the philosophical canon. Many of the Socratic dialogues concern themselves with how one ought to behave in order to live a meaningful life. Later on, in the Stoic school, which started in the Greek world and became popular throughout the latter days of the Roman Republic and its subsequent Empire, distinguishing between what is right and what is wrong was frequently seen as the main purpose of philosophizing. Perhaps a little counterintuitively, given that philosophy tends to be seen as a purely theoretical, esoteric and solely academic exercise, the Stoics argued that it is an instrument which ought to be applied for the purpose of bettering our lives.
The concept of ethics is strongly related to morality, in the sense that both are concerned with the difference between right and wrong; however, the literature emphasizes some subtle discrepancies which we ought to consider. Slight contradictions between definitions given by different works and authors may appear, but the general consensus is that morality (from the Latin "mos", meaning "custom") is much more personal in character. It refers to an individual's own principles, arising from the application of their own judgement of what good and evil are. Morality often has a religious connotation, so its way of deciding between right and wrong can appeal to spiritual arguments rather than to reason alone: for example, killing is wrong simply because religion dictates that all life is sacred. Moral behavior is also influenced by culture and society to a much higher degree. Ethics (from the Greek "ethos", meaning "character"), on the other hand, have an aspect of externality. They go beyond the judgement of an individual and tend to be considered universal. Reasoning is much more involved here in creating a set of principles that pertains to a specific profession, organization or group. Thus, ethics are much more uniform than morals and, since they tend to pertain to professional work, do not vary as much with culture and religion. Ethics are bound by their context and are consistent within it. While an oversimplification, it is often stated that ethics are the morality of a particular group.
Consequentialism and Deontology
Let us turn to the difference between right and wrong in itself. There are two major, opposing schools of thought that define paradigms for judging the morality of actions. The first, consequentialism, holds that actions may only be labelled as good or evil after observing the impact they have upon the world. Thus, depending on the end, it can be used to justify the means. Consequentialism is very appealing due to its reliance on the concept of a greater, common good. How exactly that good is defined can prove quite problematic; however, the most prominent version of consequentialism, utilitarianism, proposes that it can be equated with the greatest possible happiness for the largest possible number of people. The tradition started by Jeremy Bentham and continued by John Stuart Mill states that the only thing valuable in itself is happiness. The theory of right action in this case amounts to maximizing what is valuable: the better of two alternative actions is always the one that produces more happiness or minimizes suffering to a greater degree. Utilitarianism is based on the universally shared value of happiness, presents a simple and intuitive theory oriented towards universal application in real life, and is egalitarian in the sense that the happiness of each person counts for as much in the utilitarian calculation as anyone else's. However, any theory in philosophy is open to attack, and this one is no exception. One can contest whether only happiness is valuable, for instance. The theory of right action can also be brought into question, as thought experiments can construct scenarios in which the desires of many people are less than noble.
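The utilitarian theory of right action described above can be stated compactly: among the available alternatives, the right action is the one that maximizes aggregate happiness, with every affected person's happiness counting equally. A toy sketch in Python, in which the candidate actions and their per-person happiness scores are entirely invented for illustration:

```python
# Each hypothetical action maps to the happiness (positive) or suffering
# (negative) it produces for each affected person. The utilitarian
# calculation simply sums these effects and picks the largest total;
# no individual's score is weighted more than another's (egalitarianism).
actions = {
    "build a park":        [+2, +1, +1, -1],  # small cost to one landowner
    "build a parking lot": [+1, +1, 0, 0],
}

def total_happiness(effects):
    return sum(effects)

# The "better" action is the one producing the greater aggregate total.
best = max(actions, key=lambda name: total_happiness(actions[name]))
print(best)  # "build a park" (total +3 beats +2)
```

Note that the sketch also exposes the classic objection mentioned above: the sum is indifferent to *whose* happiness is traded away, so a large enough majority benefit can always outweigh harm to one person.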
At the other end of the spectrum of possible outlooks on morality lies deontology, which claims that the consequences do not matter because moral judgement is contained in the acts themselves. Disappointed by the idea that ethics and morality should be subjective, Immanuel Kant, one of deontology's most important proponents, argued for a theory of objective moral behavior. He claimed that morality should be justified by reason, and since reason leads us to an objective view of the world, it should also lead to an objective view of the categories of good and evil. In order to provide a prescriptive distinction between what is right and what is wrong, Kant proposes the theory of the so-called 'categorical imperative', which can be thought of as a system of three maxims that always apply, no matter the circumstances, and with no grey areas. The first maxim is that all undertaken actions should have universality. One should perform an action only after being certain that it would be alright for everybody else to do it all the time. If the universalized action paints a picture of a world we would not desire to live in, then it cannot be justified even once. The second maxim dictates that every person must be treated as an end rather than a means to an end, all the time and in all circumstances. This stance contrasts with the vision of utilitarianism briefly described above, since people cannot be manipulated or lied to regardless of any concern for the greater good. Kant argues that every person has to act as their own moral agent. The third maxim is that we should always behave as though we are the absolute moral authority in the world. Even though some people may already believe that, in various degrees of self-delusion, what Kant meant was that the world would be better off if everybody acted on our example.
A famous critique of deontology is a thought experiment in which a murderer comes armed to our house and politely enquires about the whereabouts of our children so that they may proceed to kill them. Under the categorical imperative presented above, we would be obligated to answer truthfully, which is understandably a horrifying prospect for most people. Kant agreed that we should answer the question with the truth: since each person is, as previously stated, their own moral agent, we are responsible for our actions alone and not for those of the murderer. However, there is also nothing to prevent us from closing the door and calling the police.
Depending on which school of thought we subscribe to, our responses to the ethics of AI misuse will differ and possibly be at odds with those expressed by the other paradigm. Consequentialists will tend to emphasize that tools are just that, and that it is in their application by people that moral judgement should lie. Depending on the outcomes, they are either instruments of good or of evil, but the responsibility rests with the humans using them. Deontologists could argue that some tools are built with nefarious purposes in mind and, even before their use, represent an externalization of evil intent; the world would not be a better place if such tools were to be propagated indefinitely. Moreover, AI algorithms should not be used when they can produce unfavorable results since, even though they are by and large beneficial, this breaks the principle of universality. Of course, not all of consequentialism can be reduced to utilitarianism, and not all of deontology is represented solely by Kant's categorical imperative, but we believe the examples illustrated above are helpful in understanding why people tend to take such different positions.
The Special Place Of Human Dignity
Whether from religious or secular perspectives, it has been widely accepted throughout history that human beings possess characteristics not found in any other living beings and are thus deserving of a special place in the grand hierarchy of dignity. Whether because humans share a special destiny in many of the world's religions or because evolution has endowed us with the intelligence needed to reason about our own circumstances, many believe that we should treat one another with a special kind of empathy that simply cannot be achieved by AI algorithms. Although we can hardly argue for the extension of this principle to all circumstances, it is nevertheless a point often made when discussing the ethics of using AI.
It has been argued and widely accepted (Hibbard, 2015) that as artificial intelligence comes to affect more aspects of our lives, and to a greater degree, the ethical responsibility of its creators is to be transparent about the engineering process. In computer science this is largely equated with adopting an "open source" philosophy, and efforts in this direction are already being made by companies such as OpenAI; it should be noted, however, that just because code is accessible does not automatically mean it is also comprehensible. This raises further questions about the transparency of engineering efforts, which have been partly addressed in recent years by the popularization of books and other media explaining the workings of machine learning to the general public in various degrees of detail. Whether or not such efforts are enough to counter what can, with good reason, be perceived as a lack of transparency in AI development is still an open debate. To be fair to all sides of the argument, many algorithms are black boxes, and there are cases where the scientists themselves cannot explain how certain results have been reached.
Besides transparency, responsibility is also strongly linked to accountability. There are valid concerns about how certain technologies might be used were they to become open source and freely available. Face recognition software, for example, could be employed in building what many people would characterize as an Orwellian police state. There have already been cases of facial recognition being deployed for surveillance of public spaces, most notably in China. At the time of this writing, our most powerful natural language processing model yet, GPT-3, raises concerns about its ability to write "fake news" aimed directly at each individual reader based on their biases, thus furthering the spread of misinformation and political polarization and helping destabilize society.
Bias In Machine Learning
Much of the public attention and of the debate around the ethics of AI revolves around cases in which the systems we rely on produce obviously biased results, disfavoring certain groups of people. While some instances of bias amount to problems in recognizing the speech or faces of certain ethnic or racial groups, others are far more serious, as AI systems begin to be involved in financial decisions, medicine, and law. Certain banks using algorithms to calculate credit scores have already noticed that their systems produce results which greatly favor men over women. Women have also been discriminated against by ML software employed by Amazon to handle hiring and recruitment. What this illustrates is that there was bias in the data used to train those systems. This is to be expected, however, since historical data, as the name suggests, reflects the trends that society has been through.
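The kind of bias described in the credit-scoring example can be made measurable with a simple fairness check: compare a model's approval rates across groups. A minimal sketch in Python, where the decisions, the groups, and the 0.8 rule-of-thumb threshold are illustrative assumptions rather than data from any real system:

```python
# Hypothetical loan decisions produced by some trained model:
# 1 = approved, 0 = rejected, grouped by applicant gender.
decisions = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1],
    "women": [1, 0, 0, 1, 0, 0, 1, 0],
}

def approval_rate(outcomes):
    return sum(outcomes) / len(outcomes)

rates = {group: approval_rate(o) for group, o in decisions.items()}

# "Disparate impact": ratio of the lowest to the highest approval rate.
# A common rule of thumb flags ratios below 0.8 as potentially biased.
disparate_impact = min(rates.values()) / max(rates.values())

print(rates)                   # per-group approval rates
print(disparate_impact < 0.8)  # True -> the model warrants scrutiny
```

A check like this detects unequal outcomes but cannot explain them; since the model merely reproduces patterns in its historical training data, fixing the disparity requires intervening in the data or the training objective, not just measuring it.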
There is no clear consensus yet on how to address this ethical dilemma. It is clear, though, that it should be prioritized, especially since more and more people with little or no technical knowledge use ML-based software.
As was to be expected, there have been accidents involving self-driving cars. Public opinion quickly mobilized to criticize the reliability of such systems, even though many have pointed out that far more accidents, by percentage, are caused by human drivers. These incidents, however isolated, draw much attention in no small part because of the difficulty of assigning legal liability. Even when control is fully given to the AI system, in many cases a human driver is also inside the car, which further complicates the question. If, for example, a pedestrian is hit while in the middle of the road, as happened in 2018 to Elaine Herzberg in Arizona, who is to blame: the human driver, the pedestrian, the company or developers who created the AI system, or the government? Autonomous vehicles cannot become widely adopted until there are laws in place regulating such matters.
Much of the fascination with the field of artificial intelligence is due to the singularity theory, which states that there will come a point when AI will surpass the intelligence of its human creators, beyond which we will no longer be able to contain or control it. There arises the ethical question of whether research should be allowed to bring humanity to this event, as well as how we might ensure the cooperation of a benevolent AI, not to mention the question of whether we should 'coerce' a system into doing anything if it is intelligent enough to be classified as self-aware or conscious. It is undeniable that at present the singularity and its attendant concerns are much closer to science fiction than science fact. Trans-disciplinary studies from philosophy, cognitive science, neurology, psychology, and linguistics all point to fundamental differences between how AI systems learn and how humans do. These may very well be differences of nature, not differences of degree. It could be argued that it is rather odd to consider the possibility of consciousness and will arising from complex mathematical functions, for this is what machine learning is at its core, especially since there is currently no theory that even satisfactorily describes what consciousness or will actually are.
We started with a distinction between the two main ways of looking at morality in order to better understand the frameworks within which the ethical questions posed by AI will be answered. What those answers will actually be depends a great deal on our society, the education of our experts and of non-technical people, public opinion, and many other interwoven and highly complex factors. Let us remember that Alfred Nobel was horrified upon reading a premature obituary that described him as having profited from the sale of weapons. AI systems did not appear and are not developing in a vacuum. All of us involved bear responsibility for how we use our tools, no matter how powerful they are.
Further Reading 📚📖
A. C. Grayling — The History Of Philosophy, 2019
Bill Hibbard — Ethical Artificial Intelligence, 2015
Jack Stilgoe — Who Killed Elaine Herzberg?, Springer International Publishing, 2020
John Stuart Mill — On Liberty, 1859
Will Knight — Google’s AI Chief Says Forget Elon Musk’s Killer Robots, And Worry About Bias In AI Systems Instead, MIT Technology Review, 2019