
Many controversies have sprung up in the world of AI recently, and a large share of them involve bias or poor ethical judgment in a machine learning agent. Examples include algorithms that screen potential home renters or buyers exhibiting bias against people of color. 

This is a tricky issue because, in most cases, the blame cannot be placed wholly on the user of the AI tool in question, nor entirely on the developer, nor solely on the machine learning agent. It may well be that none of the three intends to discriminate against marginalized groups (least of all the machine, which cannot really “want” anything beyond what it is programmed to want), yet the discrimination happens anyway. The problem exists and must be dealt with, and the answer likely involves collaboration among all three parties. 

How is the Developer Responsible?

In courts of law, the question of nature vs. nurture sometimes comes up, or at least arguments related to it. Many defense lawyers appeal to a defendant’s troubled upbringing or environment, pointing to the social influences that partially explain the individual’s bad actions. 

Suppose an AI agent dedicated to screening bank loan applicants were found to be exhibiting racial bias in its decisions. If the machine were, improbably, put on trial, the question of “nurture” would absolutely be raised by the defense, putting the developers, who nurtured the agent, on the stand. 

The defense would ask the developers about the data they trained the agent on, and whether it included sufficient examples of non-white applicants. Before the prosecution could ask what counts as “sufficient,” the defense could specify that, at the very least, the data set should reflect the racial makeup of the population (of the city, country, or continent the loan tool will serve). 

The danger of an inaccurate reflection, the defense would argue, is that the machine becomes “overfitted” to whichever groups dominate the data set, meaning it would be more inclined to give a loan to, say, a white applicant if white people make up a disproportionately high share of the training examples. 
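As a rough illustration of the check the defense is describing, a developer could compare each group’s share of the training data against its share of the population the tool will serve. The sketch below is a minimal example of that idea; the “race” column name and the population percentages are hypothetical placeholders, not figures from any real data set.

```python
# Minimal sketch: compare the racial composition of a training set against
# population shares for the region the loan tool serves. All column names
# and percentages here are hypothetical.

import pandas as pd

# Hypothetical population shares for the tool's target region.
POPULATION_SHARES = {
    "white": 0.58,
    "black": 0.14,
    "hispanic": 0.19,
    "asian": 0.06,
    "other": 0.03,
}

def representation_gap(training_df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """Compare each group's share of the training data to its population share."""
    observed = training_df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in POPULATION_SHARES.items():
        got = observed.get(group, 0.0)
        rows.append({
            "group": group,
            "training_share": round(got, 3),
            "population_share": expected,
            "gap": round(got - expected, 3),
        })
    return pd.DataFrame(rows)

# Usage, with a hypothetical applicants table:
# print(representation_gap(pd.read_csv("loan_training_data.csv")))
```

A large negative gap for any group is exactly the kind of under-representation that the defense would argue leads the model to overfit to the majority group.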

Another pressing question would be whether the developers programmed the agent to account for the fact that applicants from marginalized groups are often less economically advantaged than privileged groups, whose larger incomes can make them superficially appear more financially responsible. 

Humans may know that race appears on a loan application for anti-discrimination purposes, but to a machine learning agent it is just another bit of data; the agent does not understand the larger social implications of a person’s race. That is why the users of the AI tool need to take responsibility for checking the tool’s results. 
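One common pattern implied by this distinction is to keep the protected attribute out of the model’s inputs while keeping it next to the predictions, so the results can still be checked afterwards. The sketch below shows that idea under stated assumptions: the column names, the table layout, and the choice of a simple logistic regression model are all illustrative, not a prescribed pipeline.

```python
# Sketch: exclude the protected attribute from the model's features, but
# retain it alongside the predictions for later auditing. Assumes a table
# with a "race" column, an "approved" label, and otherwise numeric columns.

import pandas as pd
from sklearn.linear_model import LogisticRegression

def train_and_keep_audit_frame(applicants: pd.DataFrame):
    # The protected attribute is not shown to the model...
    protected = applicants["race"]
    y = applicants["approved"]
    X = applicants.drop(columns=["race", "approved"])

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # ...but it is kept next to the predictions, so a human or an auditor
    # can later group outcomes by race and look for disparities.
    audit = pd.DataFrame({
        "race": protected,
        "predicted_approval": model.predict(X),
    })
    return model, audit
```

It is worth stressing that dropping the race column does not by itself remove bias, since other fields (income, zip code, and so on) can act as proxies; it only ensures the attribute is available for the kind of checking the rest of this piece argues for.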

How is the User Responsible?

Many of the businesses and organizations that use an AI tool, such as an automated loan screening tool, get it from an outside source, meaning they do not design and implement the tool themselves. Often the user is not an expert in AI, which in a way is inconsequential, because expertise is not required to deploy an AI tool and enjoy its benefits. For many tools, all the non-expert needs is a simple plug-it-in-and-get-results installation. 

Even then, the responsibility to pay attention to the results needs to be stressed. Even if the user does not wholly understand the inner workings of the tool, or know the details of the training that may have contributed to bias, the user can still take action to detect bias. 

An emerging practice in AI is to bring in an outside expert, hired by the business or paid for by the developers, to analyze the results of an AI agent and check for bias. 

Scrutiny should not be limited to racial and gender bias, either; AI agents have been found to decline applications based on things like which word processor a résumé was written in. 

So, for the user, conducting “AI audits” ought to be a requirement for anyone who cares about cutting down on the many forms of discrimination an AI agent can exhibit. 
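To give a sense of how simple such an audit can be, the sketch below compares approval rates across groups in the tool’s output and flags any group falling below the commonly cited four-fifths (80%) threshold. The column names and the CSV export are hypothetical; a real audit would go much further, but even this kind of check requires no knowledge of the tool’s internals.

```python
# Minimal "AI audit" sketch: compare approval rates across groups in the
# tool's output and flag groups below a chosen fairness threshold.
# Column names are hypothetical.

import pandas as pd

def audit_approval_rates(results: pd.DataFrame,
                         group_col: str = "race",
                         outcome_col: str = "approved",
                         threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose approval rate is under `threshold` times the highest group's rate."""
    rates = results.groupby(group_col)[outcome_col].mean()
    best = rates.max()
    report = pd.DataFrame({
        "approval_rate": rates.round(3),
        "ratio_to_highest": (rates / best).round(3),
    })
    report["flagged"] = report["ratio_to_highest"] < threshold
    return report

# Usage, on a hypothetical export of the tool's decisions:
# print(audit_approval_rates(pd.read_csv("loan_decisions.csv")))
```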

Can the Machine be Blamed?

The machine is likely the least culpable of the three because, as hinted at before, it does not “feel” anything. An AI agent never discriminates out of emotions like hostility or animosity, but simply out of its analysis of data, which can be devoid of important facts about that data’s larger context. In the case of economic data on Black loan applicants, the machine learning agent has no concept of systemic racism in America, and so will only notice the trend that Black applicants tend to have lower incomes than white applicants. 

So, to reiterate, the responsibility rests most heavily on the developers and the users, who do understand the larger context in which most data ought to be read, and who should program or respond to the agent accordingly.