TalentGuard AI/ML Buyers Guide for HR

AI Buyers Guide for HR: AI in Talent Management (#1 in a series)

by Frank P. Ginac, CTO at TalentGuard

“AI is all about figuring out what to do when you don’t know what to do.”
— Peter Norvig

“It is usually agreed that a system capable of learning deserves to be called intelligent; and conversely, a system being considered as intelligent is, among other things, usually expected to be able to learn. Learning always has to do with the self-improvement of future behavior based on experience.”
— Sandip Sen and Gerhard Weiss

Introduction

Welcome to the first article in my series, AI Buyers Guide for Human Resources (HR) Professionals. My objective for this series is to arm HR professionals responsible for selecting, deploying, and managing AI-based HR Tech solutions in the enterprise with the knowledge they need to perform these tasks confidently. The information shared here is not just of value to HR professionals but also generally applies to any buyer of AI-based software. I hope you find the information helpful and welcome your feedback and comments. I would greatly appreciate it if you’d share this article and the others in the series with your network.
The application of Artificial Intelligence (AI) in HR is obscured by marketing hype and misinformation. Much of what is called AI in the marketplace falls well short of what researchers and practitioners consider “intelligent.” That doesn’t mean solutions bearing the moniker aren’t useful, but the label misleads HR technology buyers into believing AI is a panacea for many of their most challenging talent management problems.
However, there are many promising applications of AI in Talent Management that are worth investigating further. For example, finding the best match from thousands of applicants for a handful of job openings, or providing insights about an applicant’s personality by analyzing their facial expressions during video interviews, show great promise. Such solutions automate labor-intensive, error-prone, routine tasks (like resume matching) and reduce risk by screening candidates against a desired psychological profile. They learn and get better at their tasks over time with little or no human intervention. This capacity to learn and adapt is key to our definition of intelligence.
This paper explores the key concepts and principles of AI, examining the differences between AI broadly, approaches that model human thinking (such as a cognitive systems approach), and Machine Learning (ML), which depends heavily on data. We’ll also explore key areas of concern for Talent Management professionals: specifically, bias and employees’ fears of being replaced.

What is Intelligence?

Before we dive into a discussion about AI, it’s important to understand what we mean by intelligence. Philosophers have been debating the meaning of intelligence for millennia, and scientists for centuries. There is no single or standard definition of intelligence. Nonetheless, in this paper we’ll adopt a simple definition, inspired by a cognitive systems view: intelligence involves memory, reasoning, and the ability to learn and adapt.

What is Artificial Intelligence?

Movies like Terminator, I, Robot, and the series Westworld have shaped our understanding of the subject, along with experiences with common “intelligent” household devices like the Roomba or with autonomous vehicles made by companies like Tesla. But when you watch a Roomba navigate around obstacles like the legs of a chair while vacuuming your floor, or when you experience the limitations of a self-driving car attempting lane changes in heavy traffic, you quickly realize the true meaning of “artificial”: it’s not quite as good as the real thing.

“Artificial Intelligence” was coined in 1956 by John McCarthy at Dartmouth College during the first academic meeting on whether machines can think. Nobody doubted the greater speed and efficiency of computers over humans at certain mathematical tasks, but are computers actually thinking? The jury is still out on whether the past six decades of research have yielded machines that can think as we understand the meaning of that word today, but there has been much progress in understanding how humans think and what constitutes intelligence.

Artificial Intelligence vs. Machine Learning

We often hear AI and ML used interchangeably. ML is a branch of AI concerned with modeling the real world from data. The data are “features” of the thing being modeled. For example, a model that predicts a person’s sex will use other characteristics of the person, such as their name, height, weight, income, date of birth, and hair color, to determine the likelihood that the person is male versus female. (We say “determine the likelihood” rather than “determine with certainty” because ML models are probabilistic, not deterministic.) The ML agent would have been trained by a data scientist or data engineer using data from persons whose sex is known, an approach called supervised learning. Plug in the data for a person whose sex is unknown, and voila, the probability that the person’s sex is one or the other pops out the other side! This is a toy example, but it hopefully illustrates the concept.
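
The toy example above can be sketched in a few lines of Python. Everything here is illustrative: the two features, the made-up training values, and the simple nearest-centroid classifier all stand in for whatever features and algorithm a real supervised model would use.

```python
import math

# Toy "training" data: (height_cm, weight_kg) for people whose sex is known.
train = {
    "male":   [(180, 85), (175, 80), (185, 90)],
    "female": [(160, 55), (165, 60), (155, 50)],
}

def centroid(points):
    """Average each feature across the class's examples."""
    return tuple(sum(values) / len(points) for values in zip(*points))

centroids = {label: centroid(pts) for label, pts in train.items()}

def predict_proba(person):
    """Return a probability per class based on distance to each class centroid."""
    dists = {label: math.dist(person, c) for label, c in centroids.items()}
    # Convert distances to pseudo-probabilities (closer -> more likely);
    # the divisor 10 is an arbitrary scaling choice for this toy example.
    weights = {label: math.exp(-d / 10) for label, d in dists.items()}
    total = sum(weights.values())
    return {label: w / total for label, w in weights.items()}

print(predict_proba((170, 62)))  # probabilities, not a certain answer
```

Note that the output is a probability for each class, never a definitive label, which is exactly the probabilistic-not-deterministic point made above.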

There are many well-known ML algorithms, such as linear regression, logistic regression, Bayesian inference, and dozens more. Some algorithms learn by supervision, i.e., they require a human to train them; others learn on their own. Regardless, ML depends on data, and lots of it, to work. All ML is AI, but not all AI is ML.
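
As a concrete instance of one algorithm from that list, here is a minimal sketch of simple linear regression fit by ordinary least squares. The data points are invented for illustration; a real model would be fit on far more data.

```python
# Toy data: roughly y = 2x with a little noise.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]

# Closed-form ordinary least squares for a single feature:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"y = {slope:.2f}x + {intercept:.2f}")  # recovers approximately y = 2x
```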

The Intelligent Learning Agent

Consider a medical diagnostic system that can’t learn new diagnoses or adapt to new information about disease: it will hardly be useful a year or two from now. Would you trust such a system? Would you consider it intelligent? Our contention is that to be considered intelligent, an agent, whether real or artificial, human, animal, or otherwise, must possess the capacity to learn and adapt. We believe this is the basis of intelligence. Perhaps we need to be a bit more precise and call our agent an Intelligent Learning Agent, or just an Intelligent Agent for short.

A demonstration of learning can be as simple as a Roomba navigating a floor it has never traversed, using its sensors to build a map that it can later reference to optimize a cleaning route on its next run. The route it takes on subsequent runs is informed by the map and its memory of areas that tend to be dirtier than others. Each run is a bit shorter than the previous one until it maximizes its efficiency. Move the furniture around, and you’ll observe the Roomba regressing to an earlier state. Is it learning? Certainly. Is it intelligent? Maybe. Is the ability to learn and adapt sufficient to demonstrate intelligence? Perhaps, but it is certainly necessary. Without it, the agent’s intelligence is fleeting at best.
By our definition, an Intelligent Agent can receive or gather input from outside the confines of its black box; apply one or more of hundreds of algorithms to make decisions and effect actions; learn and adapt; and remember what it has learned so that it can apply this new knowledge and understanding in the future.
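
That receive, decide, act, learn, remember loop can be sketched abstractly. The class below is purely illustrative (not any real agent framework); it only shows the essential point that remembered experience changes future decisions.

```python
class IntelligentAgent:
    """Minimal sketch of the perceive-decide-learn-remember loop.

    All names and behaviors here are invented for illustration.
    """

    def __init__(self):
        self.memory = {}  # what the agent has learned so far

    def decide(self, percept):
        # Prefer an action that worked before for this percept;
        # otherwise fall back to exploring.
        return self.memory.get(percept, "explore")

    def learn(self, percept, action, reward):
        # Remember actions that produced a positive outcome.
        if reward > 0:
            self.memory[percept] = action


agent = IntelligentAgent()
agent.learn("dirty_corner", "vacuum_slowly", reward=1)
print(agent.decide("dirty_corner"))  # applies what it learned
print(agent.decide("new_room"))      # no memory yet, so it explores
```

Strip out the `memory` dictionary and the agent behaves identically forever, which is precisely why we argue that learning and remembering are necessary for intelligence.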

Fear, Uncertainty, and Doubt

While we believe that there are many useful and practical applications of AI in Talent Management, it’s important to keep in mind that the technical buyer (the domain expert who is selecting the solution based on “fit-for-purpose”) may have concerns about the potential for an AI-based solution to be at odds with their goals. Consider the case where the problem to be solved is engagement or retention, problems that my company, TalentGuard, solves. How might the buyer perceive AI: as an aid, or as something at odds with such goals? A 2018 Gallup survey found that 73% of adults believe AI will replace more jobs than it creates, and 23% are concerned about losing their own jobs to AI.

Bias in AI is a well-known problem, and it is of particular concern in Talent Management. Consider the case where data about demographically similar current employees is used to build a model to predict the performance potential of job applicants from demographic groups distinctly different from the model’s training cohort. Will such a model make predictions that lead to hiring decisions that exclude otherwise qualified candidates? It has happened before: “In 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination because the computer program it was using to determine which applicants would be invited for interviews was determined to be biased against women and applicants with non-European names.”

Bias is the Achilles’ heel of Talent Management solutions that employ AI in any form or fashion, and it must be addressed explicitly: removed systematically through product design, engineering, testing, and maintenance. The product team (design, engineering, and test) and the data science team must define a formal system to identify and eliminate bias from designs, code, training data, and the like. At TalentGuard, our goal is to educate buyers about the risk of bias in AI-based solutions and how best to evaluate a vendor’s anti-bias methodology. Our sales team is also equipped to educate prospects and to challenge their assessment of competitive solutions that don’t employ a well-defined anti-bias methodology.
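
One widely used first screen for bias of this kind is the “four-fifths rule”: compare selection rates across demographic groups and flag cases where the lowest group’s rate falls below 80% of the highest group’s rate. The sketch below shows the arithmetic on made-up data; a real anti-bias methodology would go much further than this single check.

```python
from collections import Counter

def disparate_impact(outcomes):
    """outcomes: list of (group, selected: bool) pairs.

    Returns the selection rate per group and the ratio of the lowest
    to the highest rate (the four-fifths-rule statistic).
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Made-up hiring outcomes for two demographic groups.
data = [("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False)]
rates, ratio = disparate_impact(data)
print(rates)                 # selection rate per group
print(ratio < 0.8)           # True flags potential adverse impact
```

In this example group A is selected 75% of the time and group B only 25%, a ratio of one third, well below the 0.8 threshold, so the process would warrant investigation.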

AI under GDPR

The General Data Protection Regulation (GDPR) is a regulation designed to protect the privacy of citizens of European Union (EU) member states:
“REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).”

Article 22 of the GDPR is of particular interest to software vendors employing AI/ML in their solutions. Article 22 is colloquially referred to as the “profiling” regulation and specifies the following:

  • “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
  • “…the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.”

To comply with this regulation, we must be able to defend the actions of the AI/ML algorithms we employ in our products that “significantly affect” the employee. For example, if we implement an algorithm that recommends one employee over another for promotion, we need to justify the recommendation if the employee who was passed over contests the decision. An explanation like “we ran millions of rows of data through an n-layer neural network and it concluded that you are not ready for promotion” is unlikely to suffice.
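
One design choice that makes such justification feasible is to favor interpretable models whose output decomposes into per-feature contributions a human can read and contest. The sketch below is purely illustrative; the feature names and weights are invented, and a real promotion-readiness model would be far richer.

```python
# Hypothetical interpretable "promotion-readiness" score: a weighted sum
# whose result can be explained feature by feature. Names and weights
# are illustrative only.
weights = {"tenure_years": 0.4, "skill_certifications": 0.35, "peer_reviews": 0.25}

def score_with_explanation(employee):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: weights[f] * employee[f] for f in weights}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"tenure_years": 3, "skill_certifications": 2, "peer_reviews": 4}
)
print(f"score = {total:.2f}")
for feature, c in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{c:.2f}")
```

With a decomposition like this, the controller can tell a contesting employee which factors drove the recommendation, which is a far stronger position under Article 22 than pointing at an opaque neural network.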

Buying AI

Talent Management Systems implement complex workflows and automate various human capital management tasks that are difficult, if not impossible, to implement without a software system’s aid. For most of these tasks, however, classic programming approaches to workflow management and automation are sufficient.

Best-in-class software vendors look for opportunities to apply AI in those areas of their products where it delivers customer value that would be difficult, if not impossible, to achieve using conventional approaches: problems that are computationally intractable, or that require highly specialized skills and expertise outside the system that are difficult and often very expensive to acquire and apply.

Seek out vendors who understand that AI for AI’s sake is not the best approach. Beware of those vendors who lead with an AI-only story. Finally, embrace vendors who never take the “human” out of human capital decision-making and development.
