Wednesday 25 March 2020

AI Bias: Speech recognition technology is 'racist'

Voice recognition tech makes more errors with African American voices

By Bernard Thompson

Speech recognition technologies are rife with racial biases, according to a new study by Stanford University.

The results, published in the journal Proceedings of the National Academy of Sciences, showed that, on average, systems developed by Amazon, Apple, Google, IBM and Microsoft misunderstood 35% of the words spoken by African Americans, compared with 19% of those spoken by white Americans.
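(For readers who want the arithmetic behind those percentages: figures like these are usually expressed as a "word error rate" – the share of spoken words a system gets wrong once substituted, inserted and deleted words are counted. The short Python sketch below shows one common way of computing it; the example sentences are hypothetical and are not taken from the study.)

```python
# A minimal sketch of how a word error rate (WER) can be computed when
# comparing an automatic transcript against what a speaker actually said.
# The example sentences below are hypothetical, not taken from the study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref = reference.split()
    hyp = hypothesis.split()

    # Standard dynamic-programming (Levenshtein) table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a five-word utterance gives a WER of 0.2,
# i.e. 20% of the words were misrecognised.
print(word_error_rate("please transfer fifty pounds today",
                      "please transfer fifteen pounds today"))  # 0.2
```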

(Scottish people like your humble writer already understand something about that.)

The Stanford tests were carried out between May and June 2019, using the same set of words spoken by participants of various ages and genders.

The researchers tested each of the five companies' technologies with more than 2,000 speech samples from recorded interviews with white Americans and African Americans.

(It should be noted that these biases will not necessarily be found in popular products such as Alexa and Siri, as the companies have not revealed whether those products use the same underlying systems.)

The error rates were highest for African American men, particularly when they used vernacular speech.

Sharad Goel, a Stanford assistant professor of computational engineering, who oversaw the research, believes the findings show the need for independent audits of new tech: “We can’t count on companies to regulate themselves.”
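(To illustrate what such an audit might involve, the sketch below simply computes the average error rate separately for each group of speakers, rather than quoting one overall figure. The numbers are hypothetical, chosen only so that the group averages line up with the figures quoted above.)

```python
# A minimal sketch of the kind of disaggregated check an independent audit
# might run: average the error rate separately for each demographic group.
# All numbers here are hypothetical, illustrative values.

from statistics import mean

# Each record: (speaker group, word error rate for one transcribed interview).
results = [
    ("white", 0.18), ("white", 0.21), ("white", 0.17),
    ("african_american", 0.33), ("african_american", 0.38), ("african_american", 0.34),
]

by_group: dict[str, list[float]] = {}
for group, wer in results:
    by_group.setdefault(group, []).append(wer)

for group, rates in by_group.items():
    print(f"{group}: average WER {mean(rates):.0%}")
# A large gap between the groups' averages is the kind of disparity
# the Stanford researchers reported.
```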

Meanwhile, Ravi Shroff, a New York University professor of statistics, who explores bias and discrimination in new technologies, commented: “I don’t understand why there is not more due diligence from these companies before these technologies are released. I don’t understand why we keep seeing these problems.”

The problem appears to be an old one: the data sets used to develop software and Artificial Intelligence tools are typically selected by a very narrow demographic group, namely white men in their 20s and 30s.

As far back as 2016, Joy Buolamwini, a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab, presented a popular TED Talk on the issue of what she terms "the coded gaze", or algorithmic bias.

Ms Buolamwini founded the Algorithmic Justice League, an organisation that seeks to challenge bias in decision-making software.

In her TED Talk, she noted: “Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that one in two adults in the US – that's 117 million people – have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.”

Both Ms Buolamwini's findings and the latest Stanford study raise profound issues as AI enters more and more areas of our lives, from airport facial scanners to banking security.

For example, HSBC uses a voice recognition step requiring customers to say, “My voice is my password”, when making telephone inquiries.

It is not difficult to see the considerable inconvenience of being unable to access banking facilities, or being delayed at an airport, due to nothing other than the ethnicity of the individual concerned.

But, as Ms Buolamwini points out, these inherent biases could have more serious implications, should they extend to evaluating people as more or less likely to exhibit criminal behaviour or pose other theoretical risks.

One developer for a major international company (who did not want to be identified) explained the situation from his perspective: “By now, we should all know about these issues but, in reality, there is never enough time for testing so factoring in diversity just doesn't happen as it should.”

Seemingly reinforcing Professor Goel's call for regulation, the developer went on: “Typically, management are always pushing to get the products to market to start making money as soon as possible, and that's why you can expect these problems to continue.”

The very demographic factors that lead to these biases – the relative lack of diversity in Big Tech – may prove to be obstacles to finding companies willing to invest in addressing the issues.

Creating more diverse data sets and introducing more rigorous testing with a specific focus on reducing bias requires senior management to be willing to spend more money before products reach the market, and to accept delays that may benefit less scrupulous competitors.

But perhaps these issues will soon come back to bite the very people currently causing the problem.

With huge growth in software and AI development in Asia, it may soon be white users who complain that the devices affecting their lives don't recognise their faces and voices.

And with increasingly sinister uses – such as China's use of facial recognition as part of its policy of awarding "social points" – these applications may soon be making decisions on our very worth to society as human beings.

If the white males still wielding power in Silicon Valley are really smart, they will get serious about bias while they can.


