Artificial intelligence (AI) has become a significant player in the recruitment process. However, Recruiter Daily reported, “…a recent survey conducted by the Harris Poll on behalf of the American Staffing Association reveals a concerning trend: a significant minority of job seekers believe that AI recruiting tools are more biased than human recruiters.”
According to Recruiter Daily, the survey found that "…43% of job seekers actively seeking new roles feel that AI tools exhibit more bias compared to human recruiters." This distrust is not unfounded. The article went further, highlighting what Richard Wahlquist, CEO of the American Staffing Association, noted: "Job seekers may feel comfortable using artificial intelligence tools in their job search, but that does not equate to trusting AI to make fair hiring decisions."
The potential for AI to introduce bias into the hiring process has been a topic of concern for some time. This has led to calls for increased transparency and accountability in the deployment of AI in hiring.
Adding to this discussion, a report from MIT Technology Review highlights how LinkedIn discovered bias in its job-matching algorithms. The company found that its AI was recommending more men than women for open roles, simply because men were more likely to apply for positions or respond to recruiters. MIT Technology Review highlighted that a team at LinkedIn, “…built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that before referring the matches curated by the original engine, the recommendation system includes a representative distribution of users across gender.”
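The idea described above, re-ranking recommendations so that each prefix of the list reflects a representative distribution across groups, can be sketched in a few lines. To be clear, LinkedIn has not published its actual algorithm; the greedy strategy, function name, and toy data below are illustrative assumptions only.

```python
# Minimal sketch of representation-aware re-ranking, inspired by the approach
# MIT Technology Review describes. This is NOT LinkedIn's actual algorithm;
# the greedy heuristic, names, and data here are illustrative assumptions.

def rerank_representative(candidates, group_of, targets):
    """Greedily re-rank so every prefix tracks the target group proportions.

    candidates: candidate IDs ordered by the original relevance ranking.
    group_of:   dict mapping candidate ID -> group label.
    targets:    dict mapping group label -> desired proportion (sums to 1).
    """
    remaining = list(candidates)
    reranked = []
    counts = {g: 0 for g in targets}
    while remaining:
        k = len(reranked) + 1
        # How far behind its target share is each group at prefix length k?
        deficit = {g: targets[g] * k - counts[g] for g in targets}
        # Take the highest-ranked remaining candidate from the most
        # under-represented group; fall back if that group is exhausted.
        pick = None
        for g in sorted(deficit, key=deficit.get, reverse=True):
            pick = next((c for c in remaining if group_of[c] == g), None)
            if pick is not None:
                break
        if pick is None:
            pick = remaining[0]
        remaining.remove(pick)
        counts[group_of[pick]] = counts.get(group_of[pick], 0) + 1
        reranked.append(pick)
    return reranked

# Example: the original relevance ranking is heavily skewed toward group "M".
ranking = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "M", "b": "M", "c": "M", "d": "W", "e": "W", "f": "M"}
result = rerank_representative(ranking, groups, {"M": 0.5, "W": 0.5})
# result -> ["a", "d", "b", "e", "c", "f"]: groups now alternate while
# preserving the original relevance order within each group.
```

Note the design choice: relevance order is preserved *within* each group, so the adjustment changes who appears near the top without discarding the original engine's scoring, which matches the two-stage "separate algorithm" framing in the article.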
The distrust in AI recruiting tools is further supported by the American Staffing Association Workforce Monitor, another survey conducted by The Harris Poll, which found that "Nearly half of employed U.S. job seekers (49%) believe artificial intelligence (AI) tools used in job recruiting are more biased than their human counterparts…" This sentiment underscores the need for continuous efforts to ensure fairness and reduce bias in AI systems.
As AI continues to play a crucial role in recruitment, organizations can take several proactive steps to reduce AI bias and ensure their systems are fair and equitable. BritishCouncil.org offers several key strategies, including:
- Diverse and Representative Data: Ensure that the training data used for AI models is diverse and representative of all groups.
- Regular Audits and Testing: Conduct regular audits and testing of AI systems to identify and address biases.
- Transparency and Explainability: Make AI systems transparent and their decision-making processes explainable.
- Bias Mitigation Techniques: Implement bias mitigation techniques such as re-weighting, re-sampling, or using fairness constraints during the model training process.
- Ethical Guidelines and Governance: Establish clear ethical guidelines and governance frameworks for AI development and deployment.
- Continuous Monitoring and Feedback: Continuously monitor AI systems in real-world applications and gather feedback from users to identify and address any emerging biases.
- Education and Training: Provide education and training for employees on the importance of AI ethics and bias.
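To make one of these strategies concrete, the "re-weighting" technique mentioned above can be sketched briefly. The version below follows the common reweighing idea of assigning each training example a weight so that the protected attribute becomes statistically independent of the historical label; the tiny dataset is fabricated purely for illustration.

```python
# Minimal sketch of re-weighting for bias mitigation, as mentioned in the
# strategy list above. The dataset and labels are fabricated examples.
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-example weights making group membership independent of the label.

    Each example gets weight P(group) * P(label) / P(group, label), so
    over-represented (group, label) combinations are down-weighted and
    under-represented ones are up-weighted before model training.
    """
    n = len(labels)
    n_group = Counter(groups)
    n_label = Counter(labels)
    n_joint = Counter(zip(groups, labels))
    # Equivalent to P(g) * P(y) / P(g, y), arranged to stay exact for
    # small integer counts.
    return [
        n_group[g] * n_label[y] / (n * n_joint[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Fabricated history: group "M" was hired (label 1) twice as often as "W".
groups = ["M", "M", "M", "W", "W", "W"]
labels = [1, 1, 0, 1, 0, 0]  # 1 = hired in historical data
weights = reweighing_weights(groups, labels)
# weights -> [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]: the rarer combinations
# (M not hired, W hired) are up-weighted; the common ones are down-weighted.
```

These weights would then be passed to a learner that supports per-sample weights (many libraries accept a `sample_weight` argument), which is one practical way to apply the fairness constraints the list refers to during training.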
While AI has the potential to revolutionize the hiring process, it is essential to address the biases that these tools can introduce. By doing so, we can work towards a more equitable and fair recruitment landscape.