“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’ And it downgraded graduates of two all-women’s colleges,” Reuters reported. The program also decided that basic tech skills, like the ability to write code, which popped up on all sorts of resumes, weren’t all that important, but grew to like candidates who littered their resumes with macho verbs such as “executed” and “captured.”
After years of trying to fix the project, Amazon brass reportedly “lost hope” and shuttered the effort in 2017.
All of this is a remarkably clear-cut illustration of why many tech experts worry that, rather than remove human biases from important decisions, artificial intelligence will simply automate them. An investigation by ProPublica, for instance, found that algorithms judges use in criminal sentencing may dole out harsher penalties to black defendants than white ones. Google Translate famously introduced gender biases into its translations. The issue is that these programs learn to spot patterns and make decisions by analyzing massive data sets, which are themselves often a reflection of social discrimination. Programmers can try to tweak the A.I. to avoid those undesirable results, but they may not think to, or may not succeed even if they try.
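The mechanism is simple enough to sketch in a few lines of code. The toy “resumes” and hiring labels below are invented for illustration, not Amazon’s actual data or method: when past hiring decisions are skewed against resumes containing a word like “women’s,” even a naive word-frequency model will learn to penalize that word, with no one ever programming it to.

```python
# A minimal sketch of how a model absorbs bias from historical data.
# All resumes, words, and labels here are hypothetical toy data.
from collections import defaultdict

# Historical hiring outcomes: 1 = hired, 0 = rejected.
# The skew is in the labels, not in any rule about the words.
resumes = [
    (["executed", "captured", "python"], 1),
    (["executed", "led", "java"], 1),
    (["women's", "chess", "python"], 0),
    (["women's", "club", "java"], 0),
]

# For each word, track how often it appears and how often those resumes were hired.
counts = defaultdict(lambda: [0, 0])  # word -> [appearances, hires]
for words, hired in resumes:
    for w in words:
        counts[w][0] += 1
        counts[w][1] += hired

def word_score(w):
    """Hire rate among past resumes containing this word (0.5 if unseen)."""
    seen, hired = counts[w]
    return hired / seen if seen else 0.5

def rank(words):
    """Score a new resume as the average hire rate of its words."""
    return sum(word_score(w) for w in words) / len(words)

# The model has "learned" that "women's" predicts rejection,
# purely because the historical labels were skewed.
print(rank(["executed", "python"]))  # 0.75
print(rank(["women's", "python"]))   # 0.25
```

Nothing in the code mentions gender; the penalty emerges entirely from the biased training labels, which is exactly why retraining on “cleaned” data, rather than patching the model, is usually the harder but more honest fix.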