Employees at the headquarters of Mistral AI, an artificial intelligence start-up, in Paris. (AFP)

Australian employers are increasingly using AI systems to screen and shortlist job candidates. However, new research shows that these technologies could lead to discrimination.

AI tools, such as CV scanners and vocal assessments, are designed to save time and money by sorting, ranking, and scoring applicants. This means a computer program may decide whether a jobseeker's application is accepted or rejected before the person ever meets a human interviewer, ABC AU reported.

New research by Natalie Sheard, a lawyer and postdoctoral fellow at the University of Melbourne, finds that AI hiring systems can "enable, reinforce, and amplify discrimination" against groups that have been historically marginalised.

"There are some serious risks that are created by the way in which these systems are used by employers, so risks for already disadvantaged groups in the labour market — women, jobseekers with disability or [from] non-English-speaking backgrounds, older candidates," she told ABC Radio National's Law Report.

Around 62% of Australian organisations used AI in their recruitment processes last year, according to the Responsible AI Index. However, Australia currently has no specific laws to regulate how these tools work or how companies use them.

AI was expected to reduce bias in hiring, but several high-profile cases in the U.S. have shown that it may actually increase bias. One example is an AI system developed by Amazon, which began downgrading applications from jobseekers who used the word "women's" in their CVs.

Sheard interviewed 23 people for her research on AI hiring systems, including recruiters from various industries, career coaches, an AI expert, and two employees from a major AI developer.

Her research focused on three areas of recruitment: CV screening, candidate assessments (such as psychological tests), and video interviews. In these "robo-interviews", candidates record answers to set questions, which the AI then evaluates.

She noted that some AI tools have used controversial methods, such as facial analysis, to assess these interviews, and explained that AI hiring tools can develop biases because they learn from the data they are given.

If certain groups are underrepresented in that data, the model may fail to reflect the broader population. And if the historical data encodes discriminatory decisions, the model can learn to repeat them.
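To illustrate the mechanism, here is a minimal, self-contained sketch using synthetic data and scikit-learn. Everything in it is an illustrative assumption rather than a description of any real screening product: a "proxy" feature stands in for CV wording correlated with group membership, and the historical labels encode a past penalty against one group.

```python
# Illustrative sketch (synthetic data only) of how a screening model can
# inherit bias from historical hiring decisions. All features, numbers,
# and coefficients are assumptions, not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with identical skill distributions.
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)

# A proxy feature correlated with group membership (e.g. wording patterns
# in a CV, the way "women's" acted as a proxy in the Amazon case).
proxy = 0.1 * skill + 1.5 * group + rng.normal(0, 0.5, n)

# Historical labels: past recruiters hired on skill but penalised group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

# Train a screening model WITHOUT the group column, only skill and proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Score a fresh cohort in which both groups are equally skilled.
new_group = rng.integers(0, 2, n)
new_skill = rng.normal(0, 1, n)
new_proxy = 0.1 * new_skill + 1.5 * new_group + rng.normal(0, 0.5, n)
shortlisted = model.predict(np.column_stack([new_skill, new_proxy]))

for g in (0, 1):
    rate = shortlisted[new_group == g].mean()
    print(f"group {g}: shortlist rate = {rate:.2%}")
```

Even though the group column is never given to the model, the penalised group's shortlist rate comes out lower, because the model recovers the historical penalty through the correlated proxy feature.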

For example, because only about 15% of Wikipedia contributors are women, models trained on its text can carry a predominantly male perspective into recruitment. Amazon's hiring model showed the same dynamic: it learned gender bias from a decade of software developer job applications, most of which came from men.

"I think there's absolutely a need to regulate these AI screening systems," Dr Sheard said.

She added, "Some groups have called for a complete ban on these systems, and I think there is a lot of merit to that argument, particularly while we're in a situation where we don't have proper legal safeguards in place, and where we don't really understand the impacts of these systems on already marginalised groups in the labour market."

In Australia, no legal action over AI hiring systems has yet been taken, but the Merit Protection Commissioner has issued guidance for public sector employers that use them.