
This is the twenty-third article in a series dedicated to the various aspects of machine learning (ML). Today’s article continues the discussion of bias in machine learning agents by focusing on the harmful kind of bias, where an agent makes decisions that privilege one race, gender, class, etc. over another. It aims to show why it is imperative that AI developers stay alert to the possibility of these biases in AI agents and work to prevent them.

Our last article covered a kind of inductive bias in machine learning that concerns the learning methods an agent prefers. We showed that an agent’s bias towards shorter decision trees can lead it to perform its tasks in much less costly ways, which can be beneficial in the long run.

This article, however, tackles an ethical issue in machine learning: the more problematic biases, such as racial bias, that can creep into a machine learning agent’s reasoning or performance.

If you’ve been keeping up with this series so far, you’ll know that AI agents are not inherently moral or ethical beings. They are rational, but they do not possess the emotive faculty that is (arguably) responsible for inspiring us to use our rational powers to formulate moral and ethical assertions.

Some may think that morality is a coldly rational process, and that moral principles can be arrived at through rational considerations alone, but consider an emotionless poker-playing robot that reasons it is perfectly acceptable to throw Red Bull in the eyes of its competitors. It doesn’t think about how that would make the other human player feel, only that it would make it easier to “tilt” the other player into making a bad move, and thus easier to win money.

It requires human intervention during the development and training of AI agents to ensure that an agent associates such behavior with decreased utility. You can’t really explain to a robot why it is ethically wrong to throw Red Bull at people to increase its chances of making money, because, again, it lacks empathy, but you can prevent the behavior by negotiating with the agent on its own terms. An AI agent’s goal is to maximize utility, so when its goal is to make the most money possible by playing poker, it will, in theory, do anything to get a high-utility result. By telling it that unethical liquid-flinging moves lower, rather than increase, utility, you ensure that it refrains from such behavior and behaves more ethically as a result.
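To make that idea a bit more concrete, here is a minimal sketch in Python of what “negotiating on the agent’s own terms” might look like: the unethical action is simply assigned a large utility penalty, so the agent’s own math rules it out. The action names and penalty value are purely hypothetical, not taken from any real poker bot.

```python
# Hypothetical sketch: discouraging an unethical action by lowering its utility.
# Action names and the penalty value are illustrative only.

UNETHICAL_PENALTY = 1_000  # large enough to outweigh any plausible winnings
UNETHICAL_ACTIONS = {"throw_drink", "verbal_abuse"}

def expected_utility(action, expected_winnings):
    """Return the agent's utility for an action, measured in expected winnings."""
    utility = expected_winnings
    if action in UNETHICAL_ACTIONS:
        # The agent doesn't "understand" ethics; it just sees a lower number.
        utility -= UNETHICAL_PENALTY
    return utility

# Candidate actions and their expected winnings (made-up numbers).
actions = {
    "fold": 0,
    "raise": 40,
    "throw_drink": 55,  # might tilt the opponent, but carries the penalty
}

best = max(actions, key=lambda a: expected_utility(a, actions[a]))
print(best)  # "raise" -- the penalty makes the unethical option unattractive
```

The agent never learns why drink-throwing is wrong; it simply finds that the numbers no longer favor it, which is exactly the kind of intervention described above.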

Things get trickier when it comes to curbing problematic biases in machine learning agents. One of the biggest AI controversies in recent memory was the discovery that some facial recognition software was more likely to recognize white faces than nonwhite faces. As we said, AI agents do not think along ethical or unethical lines, so this occurrence is not the result of the software being racist because of feelings surrounding certain races. Rather, it is the result of oversights made during the training process.

In the case of racial bias in facial recognition, the main culprit would most likely be the data used in the training set. The software may have been trained on a data set composed mostly of white faces, so when it encountered nonwhite faces after training, it was less skilled at recognizing them.

If you’ve been paying attention throughout this machine learning series, you may recall the concept of “overfitting,” where a machine learning agent becomes so accustomed to its training data set that it cannot effectively analyze previously unseen data encountered in the real world. Racial bias, then, is a form of overfitting, though a slightly more complicated one. A facial recognition system may be adept at analyzing white faces that were not in the training set, so it is not overfit in the usual sense, but the lack of diversity within the set leaves it overfit to a particular race.
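One practical way to surface this kind of group-specific overfitting is to break test accuracy down by demographic group instead of reporting a single aggregate number. The sketch below is a hypothetical illustration assuming you already have labeled evaluation results tagged by group; it is not the auditing method of any particular vendor.

```python
from collections import defaultdict

# Hypothetical evaluation records: (was the prediction correct?, group of the face).
# In a real audit these would come from a labeled, held-out test set.
results = [
    (True, "white"), (True, "white"), (True, "white"), (False, "white"),
    (True, "nonwhite"), (False, "nonwhite"), (False, "nonwhite"), (False, "nonwhite"),
]

per_group = defaultdict(lambda: [0, 0])  # group -> [correct count, total count]
for correct, group in results:
    per_group[group][0] += int(correct)
    per_group[group][1] += 1

for group, (correct, total) in per_group.items():
    print(f"{group}: {correct / total:.0%} accuracy ({correct}/{total})")

# Aggregate accuracy here is 50%, which hides a 75% vs. 25% gap between groups.
```

A single overall accuracy score can look perfectly respectable while masking exactly the disparity described above.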

The solution, then, is for AI developers to become more conscious of the training data used during development, and to recognize that the AI agent may be performing tasks for a more diverse crowd. A good example of this would be for conversational computing agents, like the “How may I help you?” chatbots you see on many websites, to be multilingual, since not every American who visits a website will be conversant in English. This way, biases against immigrants or populations with limited English proficiency can be alleviated by giving them the same level of service that a native speaker would receive.
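As a rough sketch of what “diversifying the training data” can look like in code, here is one naive approach: oversampling underrepresented groups until every group appears as often as the largest one. Real projects would more likely collect additional data or weight the loss function instead, and the group labels here are purely illustrative.

```python
import random
from collections import defaultdict

def oversample_balanced(examples, seed=0):
    """Naively oversample so every group appears as often as the largest one.

    `examples` is a list of (features, group_label) pairs; the group label
    is whatever demographic attribute the audit cares about.
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex[1]].append(ex)

    target = max(len(group) for group in by_group.values())
    balanced = []
    for group_examples in by_group.values():
        balanced.extend(group_examples)
        # Top up with random duplicates until this group matches the largest.
        balanced.extend(rng.choices(group_examples, k=target - len(group_examples)))
    rng.shuffle(balanced)
    return balanced

# Illustrative, imbalanced data set: 800 white faces, 200 nonwhite faces.
data = [("face_a", "white")] * 800 + [("face_b", "nonwhite")] * 200
print(len(oversample_balanced(data)))  # 1600: both groups now have 800 examples
```

Duplicating examples is a blunt instrument, of course; the more durable fix is the one the article argues for: collecting training data that reflects the diversity of the people the system will actually serve.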

Summary

Problematic biases typically do not arise in AI agents through their own reasoning, but rather result from oversights in the training and development stage. In the case of a facial recognition system that is adept at recognizing white faces but poor at recognizing nonwhite faces, the main culprit would be a lack of diversity in the data set the agent was trained on. The way to curb racial bias in such an agent is to train it on a diverse array of faces, rather than a majority-white data set.