AI in Talent Management
This article was originally published by TalentGuard CTO, Frank Ginac, on Medium.com
“AI is all about figuring out what to do when you don’t know what to do.”
— Peter Norvig
“It is usually agreed that a system capable of learning deserves to be called intelligent and, conversely, a system considered as intelligent is, among other things, usually expected to be able to learn. Learning always has to do with the self-improvement of future behavior based on experience.”
— Sandip Sen and Gerhard Weiss
The application of Artificial Intelligence (AI) in Talent Management is obscured by marketing hype and misinformation. Much of what is called AI in the marketplace is something much less than what researchers and practitioners in the field consider “intelligent.” That doesn’t mean solutions that bear the moniker aren’t useful. Still, they are misleading to HR technology buyers who believe that AI is a panacea for many of their most challenging talent management problems.
However, there are many promising applications of AI in Talent Management that are worth investigating further. For example, finding the best match from thousands of applicants for a handful of job openings, or providing insights about an applicant’s personality by analyzing their facial expressions during video interviews, shows great promise. Such applications automate labor-intensive, error-prone, and routine tasks (like resume matching) and reduce risk by ensuring candidates fit a certain psychological profile. They learn and get better at their tasks over time with little or no human intervention. This capability to learn and adapt is key to our definition of intelligence.
This paper explores the key concepts and principles of AI, examining the differences between AI broadly, approaches that model human thinking, such as a cognitive systems approach, and Machine Learning (ML), which depends heavily on data. We’ll also explore key areas of concern to Talent Management professionals, specifically bias and employees’ fears of being replaced.
What is Intelligence?
Before we dive into a discussion about AI, it’s important to understand what we mean by intelligence. Philosophers have been debating the meaning of intelligence for millennia and scientists for centuries. There is no single or standard definition for intelligence. Still, in this paper, we’ll adopt a simple definition inspired by a cognitive systems view of intelligence that defines it as something that involves memory, reasoning, and the ability to learn and adapt.
What is Artificial Intelligence?
Movies like Terminator, I, Robot, and the series Westworld have shaped the public’s understanding of the subject, along with their experiences with common “intelligent” household devices like the Roomba or with autonomous vehicles made by companies like Tesla. But when you watch a Roomba navigate around obstacles like the legs of a chair while vacuuming your floor, or when you experience the limitations of a self-driving car attempting lane changes in heavy traffic, you quickly realize the true meaning of “artificial”: it’s not quite as good as the real thing.
“Artificial Intelligence” was coined in 1956 by John McCarthy at Dartmouth College during the first academic meeting on whether or not machines can think. Nobody doubted the greater speed and efficiency of computers performing certain mathematical tasks over humans, but are computers actually thinking? The jury is still out on whether or not the past 64 years of research have yielded machines that can think as we understand the meaning of that word today, but there has been much progress in understanding how humans think and what constitutes intelligence.
Artificial Intelligence vs. Machine Learning
We often hear AI and ML used interchangeably. ML is a branch of AI concerned with modeling the real world from data. The data are “features” of the thing being modeled. For example, a model that predicts a person’s sex will use other characteristics of the person, such as their name, height, weight, income, date of birth, hair color, etc., to determine the likelihood that the person is male versus female. (Note: we say “determine the likelihood” rather than “determine with certainty” because ML models are probabilistic, not deterministic.) The ML agent would have been trained by a data scientist or data engineer using data from persons whose sex is known. Plug in the data for a person whose sex is unknown, and voila, the probability that the person is one or the other pops out the other side! This is a silly example, but it hopefully illustrates the concept.
There are many well-known ML algorithms, such as linear regression, logistic classification, Bayesian inference, and dozens more. Some algorithms learn by supervision, i.e., they require a human to train them, and others learn on their own. Regardless, ML depends on data, and lots of it, to work. All ML is AI, but not all AI is ML.
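To make the idea concrete, here is a toy sketch of a supervised, probabilistic classifier: a logistic regression trained with gradient descent in plain Python. The features, training data, and numbers are all invented for illustration; a real system would use a vetted library and far more data.

```python
import math

def train_logistic(rows, labels, lr=0.5, epochs=4000):
    """Fit a tiny logistic-regression model with gradient descent.

    rows   -- feature vectors, e.g. [height in meters, weight in 100s of kg]
    labels -- 0/1 class labels for the training examples
    """
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid turns score into probability
            err = p - y                      # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_proba(w, b, x):
    """Probability that feature vector x belongs to class 1."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented training data: [height in meters, weight in 100s of kg].
X = [[1.85, 0.90], [1.78, 0.82], [1.60, 0.55], [1.65, 0.60]]
y = [1, 1, 0, 0]
```

Note that `predict_proba` never answers with certainty; it returns a value between 0 and 1, which is exactly the “likelihood, not certainty” point above.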
Automation Agents and Automatons
The quest for general artificial intelligence, or the creation of sentient artificial intelligence like that depicted in Westworld, is still the stuff of science fiction and far from our reach. On the other hand, an AI that can direct a robot to vacuum a floor or classify images of animals is readily available today and accessible to consumers with little or no knowledge of AI. In this sense, calling a Roomba or an image classifier an “AI” is like calling yourself a member of the human race. While true, it’s too broad a classification to be useful.
Let’s consider giving AIs that perform specific tasks a more meaningful name: Automation Agents. Further, let’s think of an Automation Agent as a black box that takes input (numeric, image, or sensor data, for example) and does something with that input that affects an output, such as predicting a value, classifying an image, throwing a switch, and the like. Its “intelligence” might be hardcoded with knowledge from experts like in a medical diagnostic system, or it can be trained with data, for example, from millions of examples of images of stop signs to determine if there’s a stop sign in its field of view and then apply a vehicle’s brakes.
At this point, calling such a system “intelligent” simply because it exhibits skills and capabilities like those of a human expert falls short of the criteria of our cognitive systems model: something that involves memory, reasoning, and the ability to learn and adapt.
The Intelligent Learning Agent
Consider a medical diagnostic system that can’t learn new diagnoses or adapt to new information about disease: it will hardly be useful a year or two from now. Would you trust such a system? Would you consider it intelligent? Our contention is that to be considered intelligent, real or artificial, an agent, whether human, animal, or otherwise, must possess the capacity to learn and adapt. We believe that this is the basis of intelligence. Perhaps we need to be a bit more precise and call our agent an Intelligent Learning Agent, or just Intelligent Agent for short.
A demonstration of learning can be as simple as a Roomba showing that it can navigate a floor it has never traversed, using its sensors to build a map that it can later reference to optimize a cleaning route on its next run. The route it takes on subsequent runs is informed by the map and its memory of areas that tend to be dirtier than others. Each run is a bit shorter than the previous until it maximizes its efficiency. Move the furniture around, and you’ll observe the Roomba regressing to an earlier state. Is it learning? Certainly. Is it intelligent? Maybe. Is the ability to learn and adapt sufficient to demonstrate intelligence? Perhaps, but it is certainly necessary. Without it, the agent’s intelligence is fleeting at best.
By our definition, an Intelligent Agent can receive or gather input from outside the confines of its black box, apply one or more of hundreds of algorithms to make decisions and affect actions, it can learn and adapt, and it can remember what it has learned so that it can apply this new knowledge and understanding in the future.
Fear, Uncertainty, and Doubt
While we believe that there are many useful and practical applications of AI in Talent Management, it’s important to keep in mind that the technical buyer (the domain expert who is selecting the solution based on “fit-for-purpose”) may have concerns about the potential for an AI-based solution to be at odds with their goals. Consider the case where the problem to be solved is engagement or retention, problems that my company TalentGuard solves. How might the buyer perceive AI as an aid or as something at odds with such goals? A 2018 Gallup survey found that 73% of adults believe that AI will replace more jobs than it creates, and 23% are concerned about losing their job to an AI.
Bias in AI is well known. This is of particular concern in Talent Management. Consider the case where data about current demographically similar employees is used to build a model to predict the performance potential of job applicants from demographic groups that are distinctly different from the model’s training cohort. Will such a model make predictions that lead to hiring decisions that exclude otherwise qualified candidates? For example, “In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination because the computer program it was using to determine which applicants would be invited for interviews was determined to be biased against women and applicants with non-European names.”
Bias is the Achilles heel of Talent Management solutions that employ AI in any form or fashion, and it must be addressed explicitly. It must be removed systematically through product design and engineering, test, and maintenance of our solutions. The product team (design, engineering, and test) and data science team must define a formal system to identify and eliminate bias from our designs, code, training data, and the like. At TalentGuard, our goal is to educate buyers about the risk of bias in AI-based solutions and how best to evaluate a vendor’s anti-bias methodology. And, our sales team is equipped to educate and challenge a prospect’s assessment of competitive solutions that don’t employ a well-defined anti-bias methodology.
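One concrete, widely used check comes from US employment practice: the “four-fifths rule,” under which a selection rate for any group that is less than 80% of the highest group’s rate is treated as evidence of possible adverse impact. The sketch below, with invented data and function names, shows how such a check might be automated against a model’s hiring recommendations.

```python
def selection_rates(outcomes):
    """Per-group selection rate from (group, selected) pairs."""
    totals, picked = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + (1 if selected else 0)
    return {g: picked[g] / totals[g] for g in totals}

def adverse_impact_ratio(outcomes):
    """Lowest group selection rate divided by the highest.

    Under the four-fifths rule heuristic, a ratio below 0.8 flags
    possible adverse impact and calls for closer review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Invented audit data: group A selected 8 of 10 times, group B 4 of 10.
audit = ([("A", True)] * 8 + [("A", False)] * 2 +
         [("B", True)] * 4 + [("B", False)] * 6)
```

A check like this is a floor, not a ceiling: passing it doesn’t prove a model is fair, but failing it is a clear signal to stop and investigate the training data and features.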
AI under GDPR
The General Data Protection Regulation, or GDPR, is a regulation designed to protect the privacy of citizens of European Union (EU) member states:
“REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).”
Article 22 of the GDPR is of particular interest to software vendors employing AI/ML in their solutions. Article 22 is colloquially referred to as the “profiling” regulation and specifies the following:
- “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
- “…the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”
To comply with this regulation, we must defend the actions of AI/ML algorithms that we employ in our products that “significantly affect” the employee. For example, if we implement an algorithm that recommends one employee over another for promotion, we need to justify the recommendation if the employee who was passed over for promotion contests the decision. It’s unlikely that an explanation like, “we ran millions of rows of data through an n-layer neural network and it concluded that you are not ready for promotion” will suffice.
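One way to stay defensible is to prefer models whose outputs can be decomposed into per-feature contributions. As a hypothetical sketch (the feature names, weights, and helper function are invented for illustration), a simple linear scoring model can explain a recommendation like this:

```python
def explain_linear_score(weights, features, feature_names):
    """Decompose a linear model's score into per-feature contributions,
    ranked by absolute impact, so a recommendation can be explained."""
    contributions = {
        name: w * x
        for name, w, x in zip(feature_names, weights, features)
    }
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    total = sum(contributions.values())
    return total, ranked

# Invented promotion-readiness model and employee data.
names = ["years_in_role", "missed_goals"]
weights = [0.5, -0.2]
employee = [2.0, 3.0]
score, reasons = explain_linear_score(weights, employee, names)
```

An answer like “the score of 0.4 was driven mostly by years_in_role (+1.0), offset by missed_goals (−0.6)” is far easier to defend to a data subject than a pointer at an opaque neural network.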
Talent Management Systems implement complex workflows and automate various human capital management tasks that are difficult, if not impossible, to implement without a software system’s aid. For most of these tasks, however, classic programming approaches to workflow management and automation are sufficient.
Best-in-class software vendors look for opportunities to apply AI to those areas of their products where it will deliver value to the customer that would otherwise be difficult, if not impossible, to achieve using conventional approaches: problems that are “computationally intractable” or that require highly specialized skills and expertise that are difficult and often very expensive to acquire and apply.
Seek out vendors who understand that AI for AI’s sake is not the best approach. Beware of those vendors who lead with an “AI” story. Finally, embrace vendors who never take the “human” out of human capital decision-making and development.
If you would like to understand how TalentGuard utilizes AI, request a demo of the product you’re interested in today and see for yourself.
For more information on AI-assisted career architecture, take a look at this webinar: