Worldwide, there is an emphasis on gaining digital skills. Data science and artificial intelligence (AI) courses are gaining in popularity, and there are even coding courses for preschoolers. We are preparing for a world in which AI features ever more in our work and lives.

The digital upskilling initiative is a good and necessary one. However, we feel that the almost obsessive focus on acquiring digital skills may cause us to overlook an equally important need: moral upskilling.
As AI gains importance in decision-making across many aspects of life, there is greater scrutiny of whether AI is “ethical”. For example, will an AI system that helps filter job resumes be biased against a minority race because previous position holders were not from that race? Will facial recognition technology lead to the wrongful arrest of some people? Will an algorithm recommend certain beauty products to consumers because those products bring higher profits for the company?

Because AI is involved in decision-making, a consensus has grown that machines need to be taught ethics so that their decisions are ethical. However, it is our view that in this discussion on AI ethics, we have been misled if we think that AI can develop its own moral compass and subsequently choose to be ethical.

Why is it that we think AI can act and reason ethically on its own? The big tech industry promotes a narrative that technology can solve most of the problems we encounter in society and business – what is often called a “techno-solution” mindset. As a result, this typical Silicon Valley mindset has persuaded governments and businesses that ethical dilemmas, too, can be solved if one has the right technology. For example, in the 2018 congressional hearings in the United States, Facebook CEO Mark Zuckerberg’s response to most lawmakers’ questions was that AI could be used to solve issues ranging from hate speech and discriminatory ads to fake accounts and terrorist content.

Because of this “techno-solution” mindset, we have come to see ethics almost as synonymous with transparency and intelligibility – which, interestingly, are exactly the features that can most easily be optimised by modifying the technical characteristics of self-learning algorithmic solutions. Take, for example, Google’s ethics-as-a-service message, which conveys to business leaders the idea that algorithms revealing unethical decisions can be fixed by working on those specific technology features.

This kind of mindset leads us to expect no less from AI than that it can differentiate between right and wrong decisions. And, because of this supposed ability, we reason that AI is also responsible for its decisions. For example, a recent report from the UN Security Council revealed that an autonomous drone attacked humans in Libya last year without receiving a specific order to do so. When the news broke, the image of “killer robots” was conjured in the minds of many. This underscores the belief (and fear) that AI can make decisions autonomously and is therefore the one in charge of acting in either good or bad ways. However, in our view, this kind of logic is tantamount to saying that a gun fired itself after a person pulled the trigger. It shows that we have forgotten that humans designed the drone to launch attacks in the first place, and that the relevant information keyed into the drone’s system was put there by humans. The ethical choice to design and adopt these drones lies in the hands of humans, not the algorithm. Therefore, because the AI did not intentionally decide to commit a bad deed but simply acted upon decision-making rules coded by humans, it cannot be labelled a bad machine that we can blame.

So, when it comes to ethics, machines cannot correct us and take charge to make better and more ethical decisions than humans can. The reason is simple: AI acts as a mirror to our biases. It reflects bias when humans show bias. If datasets include human bias, machine learning will act upon those biases – as the drone had learned to do – and may even amplify them. Just because AI is called “intelligent” does not mean it can be more ethical than humans. A recent illustration of how biased data lead AI to act in unethical ways was the UK’s decision to use an algorithm to predict A-level students’ results based on how their secondary schools had scored historically. The result was that many students’ results were downgraded, particularly those from poorer schools. In an ironic twist, the use of AI, meant to reduce teachers’ bias in predicting students’ results, created a new bias and produced outcomes that most of us would regard as unethical.
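To make this mirror effect concrete, consider a minimal, purely illustrative sketch in Python. The school names and numbers are invented, and this is not the actual model used for the A-level predictions; it simply shows how a predictor that scores each student by their school’s historical average reproduces the bias baked into its data.

```python
# Purely illustrative sketch of the "mirror" effect: the model below simply
# learns each school's historical average grade. The school names and numbers
# are invented; this is not the algorithm actually used for A-level predictions.

# Hypothetical historical grades (out of 100) for two schools
history = {
    "well_resourced_school": [78, 82, 85, 80, 79],
    "under_resourced_school": [55, 60, 58, 62, 57],
}

def predict_grade(school: str) -> float:
    """Predict a student's grade as their school's historical mean."""
    past = history[school]
    return sum(past) / len(past)

# Two students with the same teacher-assessed ability...
teacher_assessment = 75

for school in history:
    model_prediction = predict_grade(school)
    print(f"{school}: teacher assessment {teacher_assessment}, "
          f"model prediction {model_prediction:.0f}")

# ...yet the student at the historically lower-scoring school is downgraded,
# purely because of where they studied: the model reflects the bias in its data.
```

The point is not the arithmetic but the source of the unfairness: the “unethical” outcome originates in the historical data and in the human choice to rely on it, not in any moral agency of the algorithm.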

All of this means that we need to stop thinking that we can trivially design machines that are more ethical than we are, in the way a programmer can create a chess program that plays far better chess than they do. It is not from machines that we can expect more responsible behaviour, but from the choices people make about intelligent technologies. For this reason, we believe that as managers seek technological improvements that make data more easily interpretable, they should also be trained to be more aware of, and better able to deal with, ethical business dilemmas. Where AI reveals unethical outcomes, managers should be trained to recognise which human bias underlay the machine’s decision. In this way, AI that amplifies our own biases can be used as a learning tool, helping managers recognise blind spots within their organisation. Promoting awareness of these ethical challenges and learning to spot the potential biases in AI will lead to a more ethically aware company, while also enhancing the firm’s ability to use technology in more responsible ways.

As the use of AI increases, it is our moral compass that we humans must rely on to guide our decisions. Without one, both machines and humans would be at a loss.

This article is an abridged version of one first published in The Business Times.