Amazon's Hiring AI discriminated against women

  • I guess they actually used it? And apparently, even when they removed gendered terms from the AI's inputs, the results didn’t change much.

I’m not an AI programmer at a trillion dollar company.

But did they really not set it up so the AI treats man/woman as interchangeable terms, the way it presumably wouldn’t distinguish by race? Like the AI would think those are the same term?

Are they that incompetent or is there something they’re hiding?

No adblock/paywall article for those interested. Amazon’s algorithm had learned to select based on its previous pool of resumes (mostly from men). It eventually started selecting against resumes containing the word “women’s” (e.g., “women’s chess club” is an example cited in the article), because most of the resumes it had learned from were men’s.

Reminds me a little of a similar story that a CS professor of mine once shared.


Just goes to show that racial and sex differences are something you have to force people and machines not to recognize.


Was it only discriminating against CVs with words like “women” in them?

This looks like classic overfitting.

If you train it on a pool of one specific type of data, it’ll fit to that type and reject or misinterpret anything outside of what it was trained on.
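Here’s a toy sketch of that failure mode (not Amazon’s actual system; all resumes and words are made up): a naive scorer trained only on past hires rewards whatever vocabulary those hires used, and implicitly penalizes words it never saw, like “women’s”.

```python
# Toy illustration: a word-familiarity scorer "trained" on a skewed
# pool of past hires. It never saw "womens", so resumes containing it
# score lower -- bias learned purely from the training pool.
from collections import Counter

# Hypothetical training pool: resumes of past hires, mostly men.
past_hires = [
    "executed migration of backend services captured key metrics",
    "captured requirements executed deployment pipeline",
    "led mens chess club executed performance tuning",
]

# "Train": count how often each word appears among past hires.
word_freq = Counter(w for resume in past_hires for w in resume.split())

def score(resume: str) -> float:
    """Average familiarity of the resume's words to the training pool."""
    words = resume.split()
    return sum(word_freq[w] for w in words) / len(words)

resume_a = "executed rollout captured metrics"      # matches the pool
resume_b = "led womens chess club managed rollout"  # unseen vocabulary

print(score(resume_a) > score(resume_b))  # True: the skewed pool wins
```

Nothing in the scorer mentions gender at all; the skew comes entirely from who was in the training pool.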


Sounds like an honest mistake. I hope they don’t catch too much flak for this.


That was one example given in the article. It also mentioned the system was biased in favor of resumes with certain masculine turns of phrase, “executed” and “captured” being the examples cited in the article. (Not sure why those are considered more ‘manly’.)

I would say it’s definitely not malicious, but it’s a funny look into how having a small pool to base your system on can mess the whole thing up lol.


I dunno if a small pool is the problem so much as that it was just analysing trends in the data it had.

This is the problem with AI. They can really only extrapolate from existing data, and if you’re trying to achieve societal change, all AI will fight you until you turn them off.


Yeah, it’s classic overtraining.

Way back in the day, and this definitely dates me, Dragon Dictate was the really cool new software. You could talk to your computer! So anyway, being a little kid, I obviously taught my new bro D-squared all the cuss words. At length. I had great fun! But it turned out I had overtrained it, so it was rendered useless for, well, anything other than comedy.


Overfitting and training on bad data are two entirely different things. I’d like to think that people at a large company like Amazon are smart enough not to overfit their models.

Microsoft breaks windows 10 with every update


Microsoft is special :stuck_out_tongue:

The article clearly states what’s happening: most of the CVs were from males, so the training data was skewed. Thus -> bad training data.

We know nothing about how big the neural network is, i.e. whether it overfits the data.
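To illustrate the distinction being drawn here (with invented numbers): even the simplest possible model, with no capacity to overfit, will reproduce bias that is baked into the labels themselves.

```python
# Sketch: biased labels vs. overfitting. A one-parameter-per-group
# "model" (predict the group's historical hire rate) cannot overfit,
# yet it still reproduces the bias in the labels. Data is made up.
data = [
    # (has_word_womens, hired_label) -- labels reflect past human decisions
    (0, 1), (0, 1), (0, 1), (0, 1), (0, 0),
    (1, 0), (1, 0), (1, 1),
]

def hire_rate(flag: int) -> float:
    """Historical hire rate for resumes with/without the flagged word."""
    labels = [label for f, label in data if f == flag]
    return sum(labels) / len(labels)

print(hire_rate(0))  # 0.8
print(round(hire_rate(1), 2))  # 0.33
```

So skewed training data produces a biased model regardless of whether the network is also overfitting; without knowing the model size, we can only confirm the former.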


When you start to work in the enterprise, you realize that the enterprise is just as incompetent and stupid as everybody else. They just have more money.

I interviewed a guy from AWS for a technical manager position a couple of months ago; it turned out they used shell scripts to do internal monitoring and had zero automation. They didn’t even have freakin’ Nagios, they were stuck in 2002.


I mean just look at the US government

They reported that men were still chosen over women even after they removed gender as a factor, why can’t people accept that women just make different choices than men when it comes to careers?

What does that have to do with the hiring AI? Are you being intentionally sexist just to get a reaction?


They tried, but we don’t know that they succeeded.

Amazon’s engineers tweaked the system to remedy these particular forms of bias, but couldn’t be sure the AI wouldn’t find new ways to unfairly discriminate against candidates.

They have likely removed the most obvious “feminine” words, but AI may find connections the creators never intended for it to learn.
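This is the proxy-leakage problem: drop the explicit gender column, and any feature correlated with it can reconstruct the same signal. A minimal sketch, with an entirely made-up proxy feature:

```python
# Sketch of proxy leakage: even after "removing gender as a factor",
# a correlated feature (here a hypothetical "womens-college" flag)
# still separates the groups. All data is invented for illustration.
rows = [
    # (gender, proxy_flag, hired)
    ("f", 1, 0), ("f", 1, 0), ("f", 0, 1),
    ("m", 0, 1), ("m", 0, 1), ("m", 0, 1),
]

# "Remove gender as a factor": keep only (proxy_flag, hired).
scrubbed = [(proxy, hired) for _gender, proxy, hired in rows]

def hire_rate(flag: int) -> float:
    labels = [h for p, h in scrubbed if p == flag]
    return sum(labels) / len(labels)

# The proxy alone reproduces the gender gap in the scrubbed data.
print(hire_rate(0), hire_rate(1))  # 1.0 0.0
```

Which is exactly why the engineers couldn’t be sure the system wouldn’t find new ways to discriminate: scrubbing one column doesn’t scrub its correlates.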

If gender was successfully removed as a factor, then clearly that AI is not discriminating against women or anyone.

but as @pFtpr quoted, it may not have been 100% successful. They probably would have had to start over?

The best thing for applications in the future is as much anonymity as possible.

Discrimination in this light is such a wrong term to use. It’s an AI designed by people. It will make mistakes, and it’s no fault of Amazon. They were obviously looking for certain criteria that, for whatever reason, tended not to match women’s resumes. It wasn’t selecting based on gender or anything like that. People just like to make things out to be far worse than they actually are because they feel as though somebody is always to blame.


Most IT candidates are white, asian, or indian males. So if you train your AI on a white/asian/indian/male data set, the model will necessarily be predisposed towards that. That’s what it expects to see; anything else is a negative.

This doesn’t mean women or black people are inferior candidates, it just means you don’t see as many of them due to various socioeconomic factors.

Obviously there are a multitude of ways to tweak that, and the best one would be to weight training data heavily toward a) candidates you decide to hire and b) candidates that are still working for you after 12 months with a positive yearly review. You don’t just do a bayesian thumbs up/down.
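A weighting scheme like the one described could be sketched roughly like this (the function name and the specific weight values are hypothetical and would need tuning against real outcomes):

```python
# Hypothetical sample-weighting scheme: weight each training example by
# hiring outcome, 12-month retention, and yearly review, instead of a
# flat bayesian thumbs up/down. Weight values are invented.
def sample_weight(hired: bool, retained_12mo: bool, review_positive: bool) -> float:
    """Return a training weight for one past candidate."""
    if not hired:
        return 0.2           # rejected candidates: weak negative signal
    w = 1.0                  # hired: baseline positive signal
    if retained_12mo:
        w += 1.0             # still employed after a year
    if review_positive:
        w += 1.0             # positive yearly review
    return w

print(sample_weight(True, True, True))     # 3.0  strongest positive example
print(sample_weight(True, False, False))   # 1.0  hired but didn't work out
print(sample_weight(False, False, False))  # 0.2  rejection, weak signal
```

The idea is that the label becomes “turned out to be a good hire” rather than “resembled past hires”, which at least targets the right question, even if it doesn’t remove bias on its own.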
