AI in Recruiting: Avoiding Discrimination & Legal Pitfalls

by Lisa Park - Tech Editor

Artificial intelligence is increasingly being touted as a solution to streamline recruitment processes, but its implementation is fraught with challenges. A recent discussion with economics professor Claudia Bünte highlighted the potential benefits of AI in sourcing, screening, interviewing, and hiring, but also underscored the significant pitfalls that can arise from a lack of technological understanding, appropriate tools, and awareness of legal frameworks.

The core issue lies in the potential for algorithmic discrimination. Bünte cited a study demonstrating systematic disadvantage for women. The research revealed that AI tools consistently rated identical qualifications lower when presented as belonging to female candidates. This bias stems from the training data used to develop these AI systems.

According to Bünte, AI learns from historical data. If that data disproportionately represents one demographic – in this case, men – the algorithm may incorrectly conclude that this demographic is inherently more suitable for a given role. This isn’t intentional malice, but a statistical artifact of biased input.
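To make that mechanism concrete, here is a minimal sketch using synthetic, entirely hypothetical data (it does not reproduce the tools or study Bünte refers to): a simple logistic regression trained on historically skewed hiring decisions ends up assigning a lower hiring probability to a female candidate with qualifications identical to a male candidate's.

```python
# Minimal sketch with hypothetical data: a model trained on historically
# skewed hiring decisions learns to penalize a demographic attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, and gender encoded as 0 (male) / 1 (female).
experience = rng.uniform(0, 10, n)
gender = rng.integers(0, 2, n)

# Historical labels: past decisions rewarded experience but applied an
# extra penalty to female candidates -- the bias the model will inherit.
hired = (experience - 2.0 * gender + rng.normal(0, 1, n)) > 5

X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# Identical qualifications, different gender: the model scores them differently.
male_candidate = [[8.0, 0]]
female_candidate = [[8.0, 1]]
print("P(hire | male, 8 yrs):  ", model.predict_proba(male_candidate)[0, 1])
print("P(hire | female, 8 yrs):", model.predict_proba(female_candidate)[0, 1])
```

Because the skew sits in the historical labels themselves, simply dropping the gender column does not fully remove the effect if correlated proxy features remain in the data.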

The U.S. Department of Labor has recognized this risk, issuing an “AI & Inclusive Hiring Framework” in October 2024 to help organizations navigate these challenges, particularly concerning people with disabilities. The framework, funded by the DOL’s Office of Disability Employment Policy, focuses on ten key areas addressing five overarching themes. As noted in a client alert from Buchalter, approximately 80% of U.S. companies and nearly all Fortune 500 firms already use AI-powered hiring software, making this guidance particularly timely.

AI is being used in various stages of the hiring process. It can assist in crafting job descriptions, screening resumes, and even conducting initial interviews through chatbots and automated assessments. However, the reliance on keyword matching and automated resume screening can inadvertently exclude qualified candidates who don’t perfectly fit the pre-defined criteria. This is a concern echoed by Hilke Schellmann, author of “The Algorithm,” who believes the greatest risk isn’t job displacement, but the prevention of qualified individuals from even being considered for a role.
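As a hypothetical illustration (not any specific vendor’s product), the snippet below shows how a naive keyword screener can reject a candidate whose resume describes the required experience in different words than the job posting, while passing one that simply mirrors the posting’s phrasing.

```python
# Hypothetical illustration: a naive keyword screener that filters out a
# qualified candidate whose resume uses different wording than the job ad.
REQUIRED_KEYWORDS = {"project management", "stakeholder", "agile"}

def passes_screen(resume_text: str) -> bool:
    """Return True only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

qualified_resume = (
    "Led cross-functional teams delivering software in iterative sprints, "
    "coordinating closely with clients and internal partners."
)
keyword_resume = (
    "Agile project management experience with regular stakeholder updates."
)

print(passes_screen(qualified_resume))  # False -- rejected despite relevant experience
print(passes_screen(keyword_resume))    # True  -- passes by echoing the ad's wording
```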

The BBC reported in February 2024 on the growing number of highly qualified candidates being filtered out by AI hiring tools, prompting concerns that the software may be screening out the best applicants. These tools often rely on body-language analysis, vocal assessments, and gamified tests, which raises questions about their validity and fairness.

Legal frameworks are emerging to address these concerns. Both the European Union’s General Data Protection Regulation (GDPR) and the EU AI Act, whose obligations are being phased in, contain provisions relevant to the use of AI in recruitment; the AI Act classifies AI systems used in employment and hiring decisions as high-risk. These regulations emphasize the need for transparency, accountability, and fairness in algorithmic decision-making.

Companies need to be aware of both the technical limitations of AI and the legal requirements governing its use. A thorough understanding of these factors is crucial for responsible implementation. Bünte suggests a simple test to quickly assess whether an AI tool is appropriate for a given task, emphasizing the importance of ongoing evaluation and monitoring.

The rise of AI in recruitment presents both opportunities and risks. While AI can potentially enhance efficiency and reduce transactional work, it also carries the potential to perpetuate and even amplify existing biases, as research on algorithmic discrimination shows. Careful consideration, informed implementation, and ongoing oversight are essential to ensure that AI is used to create a more inclusive and equitable hiring process.

The American Bar Association highlights that employers’ use of AI tools is subject to federal laws prohibiting employment discrimination, as well as emerging state and local laws specific to AI. Navigating this “maze” requires a proactive approach to legal compliance and a commitment to ethical AI practices.
