Over the past few months, there have been extensive protests over racial inequality, mainly in the United States but also in other parts of the world. Taking notice of these protests, several tech companies have decided to speak up. IBM, for example, announced that it would end its facial recognition programs so it can work on promoting racial equity in law enforcement. Amazon has barred police use of its facial recognition software until stronger regulations governing the ethical use of this type of technology are in place.
Unfortunately, regulatory changes may not be enough. What’s needed now is for the AI industry to apply its science to society as a whole, not just part of it.
AI as a Subset vs. AI as Its Own Discipline
As it stands now, many AI algorithms are biased against people of color. How did this happen? Largely because of the narrow-mindedness that has thus far locked AI into a subset of computer science (CS) and computer engineering (CE) rather than allowing it to stand as its own academic discipline.
One shortcoming of AI in its current form is that it doesn't always take the complex makeup of human behavior into consideration. If AI stood as its own discipline, researchers could address this issue. Though created in a lab, AI must function in the real world. In the CS lab, a great deal of data is gathered, but that data often lacks context and an understanding of its complexity. The result is AI with inherent biases. AI studied as its own discipline can account for the complexities of human behavior and give context to the data being collected.
Several AI algorithms, past and present, show bias against women and people of color. For example, in 2014 Amazon discovered that an AI algorithm it had developed for recruiting was biased against female candidates. In early 2019, MIT researchers found that facial recognition software was less accurate for people of color. Another 2019 study, by the National Institute of Standards and Technology, found traces of racial bias in nearly 200 facial recognition algorithms.
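The kind of disparity these studies describe can be made concrete with a simple measurement: compare a model's accuracy across demographic groups. Below is a minimal, hypothetical sketch of that idea; the function names and the toy data are illustrative inventions, not taken from any of the studies mentioned above.

```python
# Hypothetical illustration: measuring a group-level accuracy gap.
# The records and function names are made up for demonstration.

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, predicted, actual) records."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def accuracy_gap(records):
    """Largest pairwise difference in accuracy across groups."""
    acc = accuracy_by_group(records)
    return max(acc.values()) - min(acc.values())

# Toy data: a classifier that is right 3 times out of 4 for group A
# but only half the time for group B.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
print(accuracy_gap(records))  # 0.25
```

A model can score well on aggregate accuracy while a check like this reveals a large gap for one group, which is precisely why evaluating only on overall metrics lets bias slip through.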
The fact that IBM and Amazon have stepped up and decided to take on racial bias in AI is cause for hope. AI is a big part of our lives, and it’s growing exponentially. Between 2015 and 2019, the global use of AI increased by 270%. As the use of AI continues to grow, it behooves developers to take into account not only CS and CE but also non-software disciplines such as social science, politics, and law.
At North Carolina State University, AI and algorithms are currently taught under the umbrella of the CS program. MIT offers AI within its CS and CE programs. It's understandable that AI got its start there, but as it continues to be developed and used by people daily, the natural progression is for it to enter humanities tracks, including race and gender studies, business, political science, and others. Georgetown University offers one step in the right direction: it includes AI and machine learning (ML) concepts in its Security Studies curriculum.
The Noble Goal of Technology
If the study of AI is not broadened, it will continue to develop just fine at a technological level. However, what will also continue are the racial biases that certain algorithms perpetuate. These will continue unless AI is studied under the broader scope of the many other areas it affects, specifically, the social contexts within which it is used. To be sure, it’s a noble goal for technology, but not one that is beyond its capabilities.
In CE, students study computer and programming basics. In CS, they study computational and programmatic theory. These are strong foundations for the discipline of AI, but they should not be considered the only relevant knowledge. When these foundations are treated as the only knowledge necessary to develop algorithms, the same racial bias will persist. If they are treated as essential components to be built upon, developers have a chance at creating technology that benefits society on a larger scale.
At the same time that computer scientists and engineers must allow AI to extend beyond the lab, those who work in the humanities and social sciences, including psychologists, sociologists, anthropologists, and others, must be willing to step in and lend their knowledge and expertise to this worthy cause. The responsible management of AI and ML is no longer just a nice idea, but a necessary one. If we replicate human biases in our technology, what's the point? Expanding AI beyond the fields of CS and CE is crucial if we want AI to benefit all parts of society, not just one.