We can tackle bias in AIs without making them less intelligent

Making an artificial intelligence less biased makes it less accurate, according to conventional wisdom, but that may not be true. A new way of testing AIs could help us build algorithms that are both fairer and more effective.

Techniques already exist to make AIs fairer, such as preprocessing training data to remove bias, but in practice they tend to produce less accurate results.

Or do they? “The trade-off that we see is kind of an illusion,” says Sanghamitra Dutta at Carnegie Mellon University, Pennsylvania.

For example, a firm may employ mostly men because its predominantly male management has, through unconscious bias, hired fewer women. If that company uses its employment data to train an AI to assess job applicants and hire staff, the dearth of information on women makes it harder for the system to judge their aptitude, putting them at a disadvantage.

The company could use existing fairness-aware training techniques to create a new AI, but if that AI is tested on the original, biased data it will appear less accurate than its predecessor, says Dutta.

That doesn’t mean the fairer AI is no good, though, says Dutta. Biased hiring practices have made the firm’s data unrepresentative of the entire pool of job candidates. Instead, AIs should be tested using an ideal data set, says Dutta. When you do this, the trade-off between accuracy and fairness disappears.

Dutta, who carried out the work with colleagues while at IBM, has developed a way to create this ideal data set. The technique draws on a field of mathematics called information theory to equalize the amount of information on each group, providing a statistical guarantee of fairness. In the case of the hiring company, that might mean using the existing data to invent fictional female candidates so that each group is equally well represented, though Dutta says the approach works with multiple categories and with more complex data than simple employee counts.
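To make the idea concrete, here is a minimal sketch, not Dutta’s actual information-theoretic method: it simply approximates an “ideal” data set by resampling the under-represented group until every group contributes the same number of examples. The column names and the toy data are illustrative assumptions.

```python
# Minimal sketch (not Dutta's method): build a rough "ideal" data set by
# resampling each group up to the size of the largest one, so no group is
# information-poor purely because it is under-represented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical biased hiring data: far fewer women than men.
data = pd.DataFrame({
    "gender": ["M"] * 800 + ["F"] * 200,
    "score": np.concatenate([rng.normal(60, 10, 800), rng.normal(60, 10, 200)]),
    "hired": np.concatenate([rng.integers(0, 2, 800), rng.integers(0, 2, 200)]),
})

def balance_groups(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Resample every group to the size of the largest group."""
    target = df[group_col].value_counts().max()
    parts = []
    for _, grp in df.groupby(group_col):
        # Sample with replacement only when a group is too small.
        parts.append(grp.sample(n=target, replace=len(grp) < target, random_state=0))
    return pd.concat(parts, ignore_index=True)

ideal = balance_groups(data, "gender")
print(data["gender"].value_counts().to_dict())   # e.g. {'M': 800, 'F': 200}
print(ideal["gender"].value_counts().to_dict())  # e.g. {'M': 800, 'F': 800}
```

Dutta’s technique goes further than simple resampling, using information theory to guarantee that each group carries equal information, but the balancing step above captures the basic intuition.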

The approach can also be used to evaluate AIs, says Dutta. If two AIs perform similarly on biased data, but one performs better on the ideal data set, it has greater potential for both fairness and accuracy. If fair algorithms perform much better when using ideal data sets, this could also alert companies to serious bias in their data.
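The evaluation procedure might look something like the sketch below, again an illustrative assumption rather than the published method: two toy classifiers are scored on a biased hold-out set and on a balanced “ideal” hold-out set, and the one that does better on the ideal set is the one with more headroom.

```python
# Illustrative evaluation sketch (models, features and data are assumptions):
# compare classifiers on a biased hold-out set and a balanced "ideal" one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_data(n_men, n_women):
    """Toy applicant data: one skill feature plus a group flag; the true
    hiring label depends only on skill, not on gender."""
    gender = np.array([0] * n_men + [1] * n_women)          # 0 = man, 1 = woman
    skill = rng.normal(0, 1, n_men + n_women)
    hired = (skill + rng.normal(0, 0.5, skill.size) > 0).astype(int)
    X = np.column_stack([skill, gender])
    return X, hired

X_train_biased, y_train_biased = make_data(800, 200)   # skewed historical data
X_test_biased, y_test_biased = make_data(800, 200)     # hold-out with the same skew
X_test_ideal, y_test_ideal = make_data(500, 500)       # balanced "ideal" hold-out

original = LogisticRegression().fit(X_train_biased, y_train_biased)
fairer = LogisticRegression().fit(*make_data(500, 500))  # stand-in for a fairness-aware model

for name, model in [("original", original), ("fairer", fairer)]:
    print(name,
          "| biased test acc:", round(accuracy_score(y_test_biased, model.predict(X_test_biased)), 3),
          "| ideal test acc:", round(accuracy_score(y_test_ideal, model.predict(X_test_ideal)), 3))
```

The point of the comparison is the second column: similar scores on the biased hold-out set tell you little, while a clear gap on the balanced set would flag both a more promising model and serious skew in the original data.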

Fairness tests are important, says Sandra Wachter at the University of Oxford, but she cautions that they only reveal the problem of bias in society. “That’s the first step, but the actual hard work is how are we going to fix that problem.” To do so, computer scientists can’t rely on automated fixes and will need to engage more with social scientists, she says.

From New Scientist, 18 July 2020
