Artificial intelligence (AI) has already made waves in the recruitment and hiring arenas. Often, AI-based tools can help assess a candidate’s capabilities and even identify talent through sources other than internal recruitment systems. Many of them tout their ability to assist companies with finding great hires faster, frequently by screening out those who don’t seem to fit the bill.
Often, data-based hiring approaches seem like an ideal option. However, there are also a variety of legal and ethical concerns about relying on AI to this degree. Additionally, questions about the accuracy of these solutions have been raised, and privacy implications also need to be weighed.
If you currently use (or are considering using) AI for hiring, here are some key points to weigh.
Technological Innovation vs. Scientific Validity
Many people assume technology and science go hand in hand. While building tech solutions certainly draws on scientific methods and skills, that doesn’t mean the touted results have been scientifically proven.
The majority of AI-based hiring tools are fairly new, so their capabilities have not been studied in depth. Additionally, it isn’t always clear how they make hiring decisions, including what the solutions are assessing and how assumptions are tested for validity.
Issues of Discrimination
In the US, there are multiple demographic classifications that are protected from discrimination. This includes being a member of a particular gender or race, as well as being an individual with a disability.
While an AI may not be explicitly programmed to eliminate candidates based on protected criteria, making hiring decisions based solely on the system’s recommendations could leave a company vulnerable. If the AI acts in a biased manner, the organization may be liable even if the business isn’t aware the bias is occurring.
For example, some AI hiring tools can analyze a candidate’s voice during an interview. While favoring individuals who sound friendly may not be overtly discriminatory, penalizing someone for having a stutter – which may be related to a disability – could come with consequences.
Transparency and Privacy Concerns
AI hiring systems can make determinations about a candidate based on any data presented to them and factored into their calculations. However, while job seekers understand that their resume and certain other materials or interactions will be judged by a hiring manager, many may be unaware they are also being screened by AI.
Going back to the previous example: if an AI analyzes a candidate’s voice as part of the decision-making process, but the job seeker isn’t aware their voice is being reviewed by the system, that raises questions about the ethics of basing a hiring decision on that data. Similarly, since a person’s voice is largely determined by physiology, should it be a factor at all when it isn’t entirely under the candidate’s control?
Likewise, using AI to mine data from a candidate’s social media feeds falls into questionable territory. Not only is it a privacy issue (assuming the job seeker has not consented to the search), but the results can also be misleading. A person’s public posts may not accurately reflect who they are as a professional, so using that data in a hiring decision falls into a gray zone.
Analyzing social media “likes” to classify an individual may also be unethical and potentially illegal. First, there is no guarantee the results will be accurate. Second, it may reveal protected details about a job seeker, including their race, age, gender, or sexual orientation. Third, it remains a privacy issue, and companies may inadvertently discover information that should not be considered when hiring, like a person’s political affiliation or health status.
Ultimately, blindly following an AI’s recommendations when hiring could land a company in legal trouble or paint the organization’s ethics in a bad light. If you do use these tools, significant human oversight is necessary. Otherwise, the consequences could be severe.
To learn more, contact The Squires Group today.