Responsible Use of AI to Help Fight COVID-19

While COVID-19 has proven fatal for hundreds of thousands of people all over the world, it seems to have had a disproportionate effect on people of color and lower-income communities, particularly African Americans. Many argue that the reason is not biological but sociological: these groups may have more difficulty accessing healthcare, less job security, and little chance of working from home. Self-isolating is also harder, particularly for people who don't own cars and must use public transportation; stockpiling food is not an option, so they are forced to shop as the need arises. Such factors can put these communities at higher risk of COVID-19 than the rest of the population.

Artificial intelligence (AI), however, has shown promise in combating problems that have arisen due to COVID-19, from supply-chain management to early-stage vaccine research, and more. AI can be applied in so many different ways because of its ability to gather and analyze huge amounts of data for different purposes. However, AI also carries inherent biases: AI systems learn from data generated by humans, who carry with them societal imperfections and prejudices, the same ones that may have given rise to a disproportionate health impact in the first place. If we are to use AI to help fight COVID-19, we must make sure that the data collected is not biased against underprivileged communities.

4 Ways AI Can Be Used Responsibly to Fight COVID-19

1. Do not rely solely on algorithms
Yes, AI and Machine Learning (ML) are based on algorithms, but when an algorithm is applied in a social vacuum, ignoring the context around the data, its output will be biased. For example, AI systems can be used to determine which hospitals should receive ventilators and other equipment, and how many. However, disadvantaged populations tend to have higher rates of comorbidities, and if that factor isn't built into the algorithm, the resulting decisions won't reflect who actually needs what. The result can be that hospitals in lower-income neighborhoods won't have the equipment they need. Once social factors are weighed in the algorithm, its results are likely to be more accurate, as the sketch below illustrates.
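
To make the point concrete, here is a minimal sketch in Python of how an allocation score might change once comorbidity burden is weighted alongside raw caseload. The hospital names, field names, and weight are all hypothetical, not taken from any real allocation system.

```python
from dataclasses import dataclass

@dataclass
class Hospital:
    name: str
    covid_cases: int          # current confirmed caseload
    comorbidity_rate: float   # share of patients with high-risk comorbidities (0-1)

def naive_score(h: Hospital) -> float:
    """Allocation driven by caseload alone: the 'social vacuum' approach."""
    return float(h.covid_cases)

def adjusted_score(h: Hospital, comorbidity_weight: float = 1.5) -> float:
    """Allocation that also weights the community's comorbidity burden."""
    return h.covid_cases * (1 + comorbidity_weight * h.comorbidity_rate)

hospitals = [
    Hospital("Suburban General", covid_cases=120, comorbidity_rate=0.10),
    Hospital("City Community", covid_cases=100, comorbidity_rate=0.45),
]

for h in hospitals:
    print(h.name, round(naive_score(h)), round(adjusted_score(h), 1))
# The naive score ranks Suburban General first; the adjusted score flips
# the ranking because City Community's patients are at higher risk.
```

The specific weight matters less than the structural point: omitting the comorbidity term systematically undercounts need in disadvantaged neighborhoods.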

2. Avoid singling out specific communities
AI systems can use mobility data to detect people who are violating stay-at-home orders and subsequently direct more police enforcement to those communities. But since lower-income families don't have the means to buy in bulk, they are forced to go outside more often than others. Responsible AI should not respond by sending more police; rather, it should identify the root causes of why people cannot stay at home, allowing more food and resources to be made available to them.
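
One way to catch this kind of harm before deployment is a simple disparate-impact check: compare how often the system flags each community. The sketch below uses made-up records and income brackets purely for illustration.

```python
from collections import Counter

# Hypothetical records: (neighborhood income bracket, flagged by the system)
records = [
    ("low", True), ("low", True), ("low", False), ("low", True),
    ("high", False), ("high", False), ("high", True), ("high", False),
]

flags = Counter(bracket for bracket, flagged in records if flagged)
totals = Counter(bracket for bracket, _ in records)

for bracket in totals:
    rate = flags[bracket] / totals[bracket]
    print(f"{bracket}-income flag rate: {rate:.0%}")
# A large gap between the rates is a signal to investigate root causes
# (e.g., necessary shopping trips) before escalating enforcement.
```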

3. Remember that AI should be human-centric
Some portions of the population have lower rates of smartphone ownership than others. If AI relies on mobile apps to track infections, the technology will not capture the complete picture. AI must therefore take this gap into account to avoid shortchanging those people.
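
A rough way to account for the gap, assuming infections are equally likely among smartphone owners and non-owners within each group (itself a strong assumption), is to reweight app-reported counts by each group's ownership rate. The group names and numbers below are invented for illustration.

```python
# Hypothetical app-reported infection counts and smartphone ownership rates
app_reported = {"group_a": 500, "group_b": 80}
smartphone_rate = {"group_a": 0.85, "group_b": 0.40}

# Dividing by the ownership rate estimates what the count would look like
# if every group were equally visible to the app.
estimated = {g: app_reported[g] / smartphone_rate[g] for g in app_reported}
print(estimated)  # group_b's corrected count is 200, not 80
```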

4. Data should come from all populations
Various AI systems help doctors make quick decisions about patients' treatment and hospitalization. For example, an existing system from the University of Chicago Medical Center is now being upgraded to help doctors deal with COVID-19; it analyzes over 100 variables to predict whether a patient will need to be intubated within eight hours. The potential of such a system is enormous, but its value depends on which portions of the population were used as samples and data sets. If the training data is too homogeneous, can the system deliver the best healthcare to all portions of the population?
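
A basic safeguard is to compare the demographic makeup of the training data against the population the system will serve before deployment. The sketch below uses hypothetical group shares and an arbitrary 0.8 ratio threshold for flagging under-representation.

```python
# Hypothetical demographic shares: training data vs. the served population
training_share = {"group_a": 0.80, "group_b": 0.15, "group_c": 0.05}
population_share = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

for group in population_share:
    ratio = training_share[group] / population_share[group]
    status = "under-represented" if ratio < 0.8 else "ok"
    print(f"{group}: training/population ratio = {ratio:.2f} ({status})")
# group_b and group_c come out well below 0.8, a sign the model's
# predictions may be less reliable for exactly those patients.
```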

Using AI Responsibly

In the US, the risks of using AI to fight COVID-19 and other ailments lie in the data itself. The question must always be asked: is the data biased? If it is, it must be corrected, since biased data will lead to biased solutions.
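
Bias can also be checked at the output: compare a model's error rates across groups, where a large gap suggests the underlying data needs correction. The sketch below computes per-group false-negative rates on hypothetical evaluation records.

```python
# Hypothetical per-group evaluation: (group, model prediction, actual outcome)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

for group in ("group_a", "group_b"):
    rows = [(p, a) for g, p, a in results if g == group]
    misses = sum(1 for p, a in rows if a == 1 and p == 0)  # false negatives
    positives = sum(1 for _, a in rows if a == 1)
    print(f"{group} false-negative rate: {misses / positives:.0%}")
# group_a: ~33%; group_b: ~67%. A gap this size suggests the data
# under-serves group_b and must be corrected before deployment.
```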

Companies, governments, and researchers that use AI must make sure to do so responsibly. Here are four questions that policymakers should consider before implementing an AI solution to combat the virus.

  1. What are the consequences if the system makes a mistake?
  2. Can the methodology behind the AI be explained to the public in a clear way?
  3. What are potential sources of bias?
  4. How can individual privacy be protected?

By carefully considering these questions before deploying AI solutions in healthcare, among other fields, we can hope to deliver results that help every population. While the need to act now is driving forward great technological advances, ignoring responsible AI practices can do more harm than good in the long run, especially for disadvantaged communities.