Inequality has never been viewed as a good thing. Be it in the treatment of people or in the allocation of resources, inequality is frowned upon at best and despised at worst. Highlighting inequality among different races or socio-economic classes can also heighten tensions between groups.
Inequality is a sensitive topic, but its consequences are very real. In the middle of the first wave of the COVID-19 pandemic, the death rate of black Americans was more than twice that of white Americans. In Singapore, the large number of infections among low-wage migrant workers at one point grabbed national headlines. Similar disparities exist in other countries, with the poor and minorities hardest hit by the virus.
There are numerous factors behind these disparities in health outcomes, such as healthcare access and work environment. However, human bias may also play a small role. A US study by Hoffman et al found that some doctors were less likely to recommend potentially lifesaving screening procedures to black patients and perceived them as experiencing less pain.
Will the use of algorithms help to reduce health disparities? To be clear, artificial intelligence (AI) and algorithms can also be biased. However, when biases do occur, they are more easily corrected in algorithms than in humans. It takes a long time to change mindsets, but only a short time to change algorithms.
In fact, algorithms are increasingly used in the healthcare sector. Singapore’s Integrated Health Information Systems (IHiS) developed the Business Research Analytics Insights Network (BRAIN), which pulls information from various sources to help identify people most at risk of developing diabetes. Google and HCA Healthcare, a US hospital chain, recently announced a partnership to use data to improve operations and safety. In 2016, the Cleveland Clinic in the US worked with Microsoft to use AI to identify patients at risk of cardiac failure. All in all, employing more AI and algorithms has the potential to reduce doctors’ workload, and hence burnout, as well as health disparities.
There exists a paradox, however: patients and the public are generally reluctant to let AI and algorithms make important medical decisions, believing that algorithms are devoid of empathy. This is a pity, because when the workload is overwhelming, such as during natural disasters or a pandemic, algorithms can help alleviate the strain.
To reduce people’s aversion to algorithms in healthcare, my colleagues from US universities and I found that the key may lie in making the threat of inequality more salient.
In studies involving participants from Singapore and the US, we found that emphasising inequality in medical outcomes increases the preference for algorithmic decision-making.
To illustrate, when participants read about how COVID-19 affects different races and economic classes unequally, and were then asked to choose between a hospital where humans make the decisions and one where an algorithm does, they showed a greater preference for the latter.
Participants also showed a greater preference for the algorithm-driven hospital to receive medical supplies from the government. This preference was stronger among those who found the inequality personally relevant.
While emphasising the threat of inequality to promote AI adoption has its pros, some may argue that the cons are greater: for example, doing so could undermine doctors’ authority and raise tensions among different groups.
Still, while the move may create tension in the short term, it will benefit the disadvantaged in the long term.
Human bias exists, and taking action to reduce it is better than wishing it never existed.
In seeking ways to improve AI adoption, be it in healthcare or other industries, highlighting the threat of inequality seems like an unorthodox method. But people’s psychology in accepting new technologies is never purely rational. Sometimes, an untrodden path may indeed lead to a new destination.