Sexist AI Recruiting
Employee staffing and recruiting is serious business. According to this source, the staffing and recruitment industry in the United States was valued at over $150 billion in 2019. Many companies (especially organizations like Facebook and Google) spend millions scouring the country for talented professionals and interviewing them for opportunities. Given the amount of time, energy, and stress involved, it makes sense that some firms would want to automate some (or all) of the recruitment process.
Many believe that AI can be a highly effective means of automatically identifying qualified candidates from the sea of resumes that hit a company's applicant tracking system. Some may also believe that unlike a human, who is fallible and has biases that can cause them to reject an otherwise qualified candidate, an AI has no such biases and can evaluate candidates with lightning-quick analytical precision.
The truth is that AI, if not properly designed and managed, can magnify the biases of its creators, often in unintended ways.
In late 2018, Amazon terminated an AI product it had been using internally to automatically vet candidates for a number of roles. The reason: its models showed a strong bias against female candidates.
What went wrong?
AI is neither sexist nor does it hold any particular opinion about people or society. It merely finds the patterns in data that best correlate with the target specified by its creator.
A number of articles have already been written on the subject, many of which rightly point out that the tool was trained on resumes submitted to the company over a 10-year period, the majority of which came from men.
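To make the mechanism concrete, here is a minimal sketch of how skewed history can poison a naive resume scorer. Everything below is hypothetical (the synthetic data, the token names like "womens_club", and the scoring function are illustrative inventions, not Amazon's actual system): skill is generated independently of gender, yet a token that merely signals gender ends up with a low score because past hiring decisions were biased.

```python
import random

random.seed(0)

# Hypothetical toy history: each "resume" is a set of tokens plus a hired label.
# Skill is independent of gender, but past decisions favored male applicants.
def make_history(n=1000):
    data = []
    for _ in range(n):
        male = random.random() < 0.8          # 80% of past applicants were men
        skilled = random.random() < 0.5       # skill independent of gender
        tokens = {"python"} if skilled else {"excel"}
        if not male:
            tokens.add("womens_club")         # gender-proxy token, e.g. a club name
        # Biased past outcome: skilled women were hired only half as often
        hired = skilled and (male or random.random() < 0.5)
        data.append((tokens, hired))
    return data

# Naive scorer: estimate P(hired | token) directly from the biased history.
def token_scores(data):
    counts, hires = {}, {}
    for tokens, hired in data:
        for t in tokens:
            counts[t] = counts.get(t, 0) + 1
            hires[t] = hires.get(t, 0) + int(hired)
    return {t: hires[t] / counts[t] for t in counts}

scores = token_scores(make_history())
# The gender-proxy token inherits the historical bias even though it says
# nothing about ability:
print(scores["python"], scores["womens_club"])
```

Running this, the "womens_club" token scores well below "python" purely because of who was hired in the past; a model optimizing against that target will learn to penalize it, which mirrors the failure reported in Amazon's case.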
The biggest takeaway from this story is to avoid relying solely on AI to make business decisions.