Are we making AIs racist and sexist? Researchers warn machines are learning to have human biases

When large data sets include social bias, machines learn that too, explains James Zou, Assistant Professor for Biomedical Data Science at Stanford University.


  • AI systems take as input large amounts of raw data and extract patterns 
  • When these large data sets include social bias, the machines learn that too
  • Researchers asked AI to generate ‘He is to X as She is to Y’ analogies
  • While many results were common sense, others reflected stereotypes

Machine learning is ubiquitous in our daily lives.

Every time we talk to our smartphones, search for images or ask for restaurant recommendations, we are interacting with machine learning algorithms.

They take as input large amounts of raw data, like the entire text of an encyclopedia, or the entire archives of a newspaper, and analyze the information to extract patterns that might not be visible to human analysts.

But when these large data sets include social bias, the machines learn that too.


If the source documents reflect gender bias – if they more often have the word ‘doctor’ near the word ‘he’ than near ‘she,’ and the word ‘nurse’ more commonly near ‘she’ than ‘he’ – then the algorithm learns those biases too, the researcher explains

AI LEARNS TO BE SEXIST

According to James Zou, Assistant Professor for Biomedical Data Science at Stanford University, machine systems learn human biases when those biases are present in their training data.

This can include sexism, leading the AI to reproduce gender stereotypes.

The algorithm makes its decisions based on which words appear near each other frequently.

If the source documents reflect gender bias – if they more often have the word ‘doctor’ near the word ‘he’ than near ‘she,’ and the word ‘nurse’ more commonly near ‘she’ than ‘he’ – then the algorithm learns those biases too.

A machine learning algorithm is like a newborn baby that has been given millions of books to read without being taught the alphabet or knowing any words or grammar.

The power of this type of information processing is impressive, but there is a problem.

When it takes in the text data, a computer observes relationships between words based on various factors, including how often they are used together.
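
To make that concrete, here is a minimal sketch in Python of the kind of co-occurrence counting such a system starts from. The two toy sentences are invented for illustration only; a real system would scan millions of documents the same way.

```python
from collections import Counter

# Two toy sentences invented for illustration; a real system would scan
# millions of documents in the same way.
corpus = [
    "he is a doctor and she is a nurse",
    "the doctor said he would call the nurse and she agreed",
]

WINDOW = 3  # how many preceding words count as "near each other"
cooccur = Counter()

for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        # Pair the current word with the words just before it,
        # so every nearby pair is counted exactly once.
        for neighbour in words[max(0, i - WINDOW):i]:
            cooccur[tuple(sorted((word, neighbour)))] += 1

# If 'doctor' shows up near 'he' more often than near 'she' across the
# corpus, that imbalance is what the learning algorithm later encodes.
print(cooccur[("doctor", "he")], cooccur[("doctor", "she")])
```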

We can test how well the word relationships are identified by using analogy puzzles.

Suppose I ask the system to complete the analogy ‘He is to King as She is to X.’ If the system comes back with ‘Queen,’ then we would say it is successful, because it returns the same answer a human would.
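
Systems of this kind answer analogy puzzles with arithmetic on learned word vectors. The sketch below uses tiny made-up vectors purely to show the mechanism; real embeddings have hundreds of dimensions learned from billions of words, and the specific numbers here are not from the researchers' data.

```python
import numpy as np

# Hypothetical word vectors, invented for illustration only.
vectors = {
    "he":    np.array([1.0, 0.0, 0.2]),
    "she":   np.array([0.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.1, 0.9]),
    "queen": np.array([0.1, 1.0, 0.9]),
    "apple": np.array([0.3, 0.3, 0.1]),
}

def complete_analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the arithmetic b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue  # the answer should be a new word, not one from the puzzle
        # Cosine similarity between the target point and each candidate word
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(complete_analogy("he", "king", "she"))  # with these toy vectors: 'queen'
```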

Our research group trained the system on Google News articles, and then asked it to complete a different analogy: ‘Man is to Computer Programmer as Woman is to X.’

The answer came back: ‘Homemaker.’
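
Anyone can probe the publicly released word2vec vectors trained on Google News in a similar way. The sketch below uses the gensim library's downloader; whether the phrase token 'computer_programmer' is in the released vocabulary, and what the query returns, depends on that released model, so treat this as an illustration rather than a reproduction of the researchers' exact experiment.

```python
import gensim.downloader as api

# Downloads roughly 1.6 GB of pretrained Google News vectors on first run.
model = api.load("word2vec-google-news-300")

# 'Man is to computer_programmer as Woman is to ?' expressed as
# vector arithmetic: computer_programmer - man + woman
print(model.most_similar(positive=["woman", "computer_programmer"],
                         negative=["man"], topn=3))
```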
