Artificial Intelligence: pride and prejudice

Interview with Marco Basaldella, researcher @University of Cambridge


"It's easier to break an atom than a prejudice," Einstein said.

Apparently, the rule also applies to Artificial Intelligence algorithms, since they too develop prejudices. How is it possible that such a typically human attitude has taken root in artificial processes? And how can we ensure their objectivity?

Londa Schiebinger, a science historian at Stanford University, put it bluntly in an article published in Nature: Artificial Intelligence can be sexist and racist. A sad example is Google’s automatic image categorization, which in 2015 labelled pictures of black people as "gorillas"; other algorithms, studied by researchers at the University of Virginia, recognized only the women in photos of kitchens and ignored the men, as if to indicate that the kitchen is the place where women should be.


To understand why human prejudice is amplified and perpetuated by Machine Learning algorithms, we need a clear understanding of the underlying mechanisms. There are two aspects to take into account, highlighted in a recent TEDx talk by Marco Basaldella, a computer scientist and AI researcher:

  • Correlation is not causation

The correlation between two facts does not necessarily imply that one is the cause of the other. For example, if we look at the consumption of mozzarella and the number of people graduating in civil engineering in the USA, we can see that the two values follow the same trend, so they are apparently related.

Eating mozzarella, of course, does not make us better engineers. But if we feed these data to an AI algorithm and ask it to predict how many people are going to graduate in engineering, that apparent cause-and-effect relationship will mislead the AI and produce wrong results.

Going back to the kitchen example, the neural network was trained on images in which only women appeared: that is how the correlation arose, so that, for the model, a person in a kitchen must necessarily be a woman.
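
To make the mozzarella example concrete, here is a minimal Python sketch with invented, purely illustrative numbers (not the real statistics): both series simply grow over time, so a naive regression happily "predicts" engineering graduates from cheese consumption.

```python
import numpy as np

# Invented yearly figures: per-capita mozzarella consumption (lbs) and
# civil engineering graduates (thousands). Both simply trend upward over time.
mozzarella = np.array([9.3, 9.7, 9.7, 9.9, 10.2, 10.5, 10.6, 10.8, 11.0, 11.2])
graduates  = np.array([48.0, 49.1, 50.4, 51.2, 54.7, 56.2, 55.5, 57.1, 59.0, 60.8])

# The correlation is close to 1, so a naive model treats mozzarella
# consumption as a strong "predictor" of engineering graduates.
r = np.corrcoef(mozzarella, graduates)[0, 1]
print(f"correlation: {r:.2f}")

# A least-squares fit happily extrapolates this non-existent cause-effect link.
slope, intercept = np.polyfit(mozzarella, graduates, 1)
print(f"'predicted' graduates at 12 lbs of mozzarella: {slope * 12 + intercept:.1f}k")
```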

  • Garbage in, garbage out


Feeding algorithms wrong data, data that contains prejudices and is therefore "garbage", teaches them wrong reasoning. Another example is the risk assessment algorithms used in some American courts, which estimate a defendant’s risk of committing new crimes.

However, these algorithms seem to disadvantage black people: an independent investigation showed that the algorithm associated the likelihood of reoffending with skin color, regardless of the economic and social context the defendant came from, because that data had not been provided. For this reason, and because AI is now such an integral part of our lives, it is essential to study solutions that make algorithms more equitable.
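
To see this mechanism in miniature, here is a fully synthetic Python sketch (the numbers are invented, not the real court data): reoffending in this toy dataset depends only on economic hardship, which happens to be unevenly distributed between two groups. If the hardship column is withheld and only the group label is provided, the model's risk scores end up splitting along group lines.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
# Synthetic population: a binary group label, an unevenly distributed hardship
# indicator, and an outcome that depends ONLY on hardship, never on the group.
group = rng.integers(0, 2, size=n)
hardship = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)
reoffend = (rng.random(n) < np.where(hardship == 1, 0.5, 0.1)).astype(int)

# The model is only ever shown the group label ("garbage in")...
model = LogisticRegression().fit(group.reshape(-1, 1), reoffend)

# ...so its risk scores split along group lines ("garbage out").
risk = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group 0: {risk[0]:.2f}   group 1: {risk[1]:.2f}")
```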

Similar algorithms are also frequently found in business, especially in Content Management. Given the amount of content produced daily by companies, it is essential to classify it, so that it can be organized, searched and reused over time. Being able to do that automatically, with AI’s help, would result in great savings in terms of time and energy. But semantic recognition can still make mistakes or produce inaccuracies.

THRON implements AI engines with a "learning by doing" ability: they learn to tag content automatically and add relevant metadata, even starting from a small set of content, with increasingly accurate results.
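
As a purely generic sketch of what a "learning by doing" tagger can look like (this is not THRON's actual engine, just a common incremental-learning pattern, with hypothetical tags and texts): a classifier starts from a handful of labelled assets and is updated every time an editor confirms or corrects a suggested tag.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

TAGS = ["product", "event", "tutorial"]      # hypothetical tag set
vec = HashingVectorizer(n_features=2**16)    # turns text into fixed-size features
clf = SGDClassifier(random_state=0)          # supports incremental (partial) fitting

# A small initial set of labelled content is enough to start suggesting tags.
seed_texts = ["new shoe collection spring catalogue",
              "annual partner conference in Milan",
              "how to configure the export pipeline"]
clf.partial_fit(vec.transform(seed_texts), ["product", "event", "tutorial"], classes=TAGS)

# Every editor confirmation or correction becomes one more training example,
# so the suggestions get more accurate as the system is used.
clf.partial_fit(vec.transform(["step by step guide to video embedding"]), ["tutorial"])

# Suggest a tag for a new, unseen asset.
print(clf.predict(vec.transform(["summer catalogue photo shoot"])))
```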

Let’s delve deeper into the subject with Marco Basaldella, an expert in the field.

Q: How do we train Artificial Intelligence? How do you think the processes used to train algorithms can be improved?

A: Training a Machine Learning algorithm is like teaching a child: the decisions it makes will only be as good as the experience it is given. And just as a child absorbs the prejudices of the environment they are raised in, so does a Machine Learning algorithm - for example, the model that associates "cooking" with "women", simply because it was never shown a man in a kitchen.

That's why it's important, in designing a Machine Learning algorithm, to involve both Artificial Intelligence experts and domain experts, who should be able to select the most suitable features for the environment the model is going to work in. Likewise, once the model has been trained, it is useful to review with the domain expert not only what the model has learned, but also why it makes certain choices.
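
As a toy illustration of that last point, with invented captions rather than the real image dataset: if a tiny classifier never sees a man in a kitchen, inspecting its learned weights shows exactly why it makes its choices, because the word "kitchen" ends up carrying weight towards the "woman" class.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented captions: no man ever appears in a kitchen.
captions = [
    "a woman cooking in the kitchen", "a woman washing dishes in the kitchen",
    "a woman stirring a pot in the kitchen", "a man fixing a car in the garage",
    "a man mowing the lawn", "a man working at a desk",
]
labels = ["woman", "woman", "woman", "man", "man", "man"]

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(captions), labels)

# Positive weights push towards the alphabetically later class ("woman"),
# so a positive weight on "kitchen" exposes the learned stereotype.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print("weight for 'kitchen':", round(weights["kitchen"], 2))
print("weight for 'garage': ", round(weights["garage"], 2))
```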

For example, risk assessment algorithms associated ethnicity with the risk of committing crimes because they were not informed about the social and economic conditions of the defendants, despite the fact that it is (fortunately) now universally accepted that it is the latter, and not skin color, that pushes people towards crime. The algorithm therefore skipped a step: it is true that in the USA there are, proportionally, more black than white people in prison, but that is because in America, despite the progress of recent decades, white people still have access to better schools, better jobs, better credit opportunities, and so on. The algorithm, however, could not know any of this because it had not been told, so it formed the "prejudice".

Some might argue that, if algorithms absorb our prejudices, they are no better than we are. As mentioned, however, prejudices can be avoided through a proper selection of the data the algorithm is built on, a process called feature engineering (sketched in the example below). And machines, at least, are still impervious to emotions: it has been shown that judges hand down harsher penalties after their favorite team has lost, and in this (for now!) Artificial Intelligence can’t imitate us yet.
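
To close with a sketch of the feature-engineering point, again on the synthetic data from the earlier recidivism example (invented numbers, not a real system): once the true driver, economic hardship, is made available to the model, the weight it places on the group label collapses towards zero.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, size=n)
hardship = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)
reoffend = (rng.random(n) < np.where(hardship == 1, 0.5, 0.1)).astype(int)

# Feature engineering as data selection: train once on the group label alone,
# and once with the real socio-economic driver included alongside it.
blind = LogisticRegression().fit(group.reshape(-1, 1), reoffend)
informed = LogisticRegression().fit(np.column_stack([hardship, group]), reoffend)

print("weight on group, hardship withheld:", round(blind.coef_[0][0], 2))
print("weight on group, hardship included:", round(informed.coef_[0][1], 2))
```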