Is your software racist?

Source: 
Author: 
Coverage Type: 

Google’s Translate tool “learns” language from an existing corpus of writing, and that writing often includes cultural patterns in how men and women are described. Because the model is trained on data that already carries biases of its own, the results it spits out serve only to replicate and even amplify them. It might seem odd that an ostensibly objective piece of software would yield gender-biased results, but the problem is a growing concern in the technology world. The term is “algorithmic bias” -- the idea that artificially intelligent software, the stuff we count on to do everything from powering our Netflix recommendations to determining our qualifications for a loan, often turns out to perpetuate social bias.
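One way researchers illustrate this effect is by probing word embeddings, which, like translation models, are trained on large text corpora and absorb the cultural patterns in them. The sketch below is not from the article; the library and pretrained model named here are assumptions chosen for illustration. It asks a publicly distributed word2vec model which words it associates with “doctor” when the gender direction is shifted from “man” toward “woman” -- the kind of probe that has been shown to surface gendered occupation associations inherited from training text.

    # Illustrative sketch only: probes gender associations learned by a
    # pretrained word2vec model. Results depend entirely on the corpus
    # the model was trained on.
    import gensim.downloader as api

    # "word2vec-google-news-300" is a publicly available pretrained model
    # (an assumption for illustration, not a model named in the article).
    vectors = api.load("word2vec-google-news-300")

    # Analogy-style query: doctor - man + woman. If the training text
    # describes men and women differently, that pattern shows up here.
    for word, score in vectors.most_similar(positive=["doctor", "woman"],
                                            negative=["man"], topn=5):
        print(word, round(score, 3))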

Algorithmic transparency may not be the only way for regulators to gain insight into biased systems; another would be scrutinizing the data an algorithm learns from. But as artificial intelligence (AI) grows in importance, some believe the scope of the problems the industry faces will outstrip the ability of legacy agencies to keep up. Prominent voices, including Elon Musk, a co-founder of the research group OpenAI, and former Department of Justice attorney-adviser Andrew Tutt, have proposed going so far as to establish a standalone federal agency to oversee the development of AI, much as the Federal Communications Commission and the Food and Drug Administration do for their respective industries.
