


3 Things every HR practitioner should know about "AI"

Jun 27, 2019

I’m sure it’s only a matter of time before I see an ad for an “AI”-driven pencil sharpener. “AI” is the tech-marketing buzzword du jour, and it seems we’re fated to swim for some time in a tidal wave of “AI” hype.
 
The only way to avoid the hype is to live offline, perhaps in a geographically remote location, say the Hindu Kush area that straddles Afghanistan and Pakistan. Which is exactly where my journey with “AI” began.
 
In 2002-03, a few governments had a real need to understand tribal relationships in that region, and I had the opportunity to work on a team that addressed the problem, using tech that you now know as “AI”.
 
These were all intensely smart people. One teammate managed IBM’s AI lab back when the thing we now know as Watson was born. Between them, they held a bunch of patents related to the thing we now hear about all the time as “machine learning”. I was never, ever going to be the smartest person in the room when that team was together, which is what made it such an awesome experience.
 
These days I’m not focused on tribal relationships, but on helping companies build talent networks that allow them to find and hire great people. I’m still not the smartest person in the room, but I do have some perspective on “AI” and how it’s applied in HR and Human Capital Management (HCM). Here are three things every HR practitioner should keep in mind when it comes to “AI”:
 

1. Always use “AI” in quotes

 
“AI”, like all tech marketing buzzwords, doesn’t really describe one technology, or even one approach to technology, but rather several that can be, and often are, intertwined.
 
For example, people call Siri and Alexa “artificial intelligence”, and I’ve also heard people describe chatbots as “AI”. Of course what they have in common is that they’re designed to converse with humans. Let’s refer to these natural language technologies, whether spoken or text based, as “conversational UI”.
 
Conversational UI gives us the privilege of interacting with the computer on human terms, and makes our devices easier to use. But to be called “intelligent” it needs to do much more than converse. Am I the only one who remembers Teddy Ruxpin? By any common-sense definition, “intelligence” requires the ability to observe facts and conditions, and to make assumptions and inferences from them.
 
For example, I might ask Siri to set up a meeting for me Wednesday at 10am. It should observe whether I’ve already got something scheduled at that time, and assume that I don’t want to book myself overlapping meetings.
 
It will observe the location of the new meeting, as well as the location I’ll be at before that meeting. It can infer the transit time in between, and assume that I may want my calendar to reflect that travel time. And of course if I tell it not to book the travel time as “busy”, the application should infer (or if you prefer, “learn”) from my choices and adjust going forward.
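 
To make that observe/infer/assume loop concrete, here’s a rough sketch of the kind of logic a calendar application might run. Every name in it (Meeting, estimate_travel_minutes, propose_meeting) is invented for illustration; this is not Siri’s or Calendar’s actual code, just a plausible shape for it.
 
```python
from datetime import timedelta

# Hypothetical sketch of calendar "intelligence"; none of these names are a real API.
class Meeting:
    def __init__(self, title, start, end, location):
        self.title, self.start, self.end, self.location = title, start, end, location

def estimate_travel_minutes(origin, destination):
    # Stand-in for a real distance/transit lookup (e.g., a mapping service).
    return 0 if origin == destination else 30

def propose_meeting(calendar, new, mark_travel_busy=True):
    # Observation: is the requested slot already taken?
    for m in calendar:
        if m.start < new.end and new.start < m.end:
            return f"Conflict with '{m.title}' -- want a different time?"

    # Inference: where will I be just before this meeting, and how long is the trip?
    prior = max((m for m in calendar if m.end <= new.start), key=lambda m: m.end, default=None)
    travel = estimate_travel_minutes(prior.location, new.location) if prior else 0

    calendar.append(new)
    # Assumption (adjustable based on user feedback): block the travel time off as busy.
    if travel and mark_travel_busy:
        calendar.append(Meeting("Travel", new.start - timedelta(minutes=travel), new.start, "in transit"))
    return f"Booked '{new.title}' with {travel} minutes of travel time before it."
```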
 
If you’re saying “DUH . . . . those assumptions and inferences are already supported in most calendar software”, right on.
 
That’s the point: the only thing “Siri” is contributing is the conversational UI bit. The “hard and bony” part, including the inference and assumption logic as well as the data, lives in Calendar.
 
In the calendar example, the application observed and connected certain data points, and then responded to my preferences on the fly. For the sake of having a name, let’s call that “hard and bony” part, built on data, assumption and inference, and observation of user behavior, machine learning. Machine learning is the proper name for what most people mean when they say “artificial intelligence”, and it’s actually the harder part to get right technologically, because it’s far more data and logic intensive.
 

2. Garbage in, garbage out (the data is the hard part)

 
In the calendar example, we’re talking about a minuscule data set: date, time, and location. (One might pedantically argue that the data set is larger, because to calculate travel time you’d need a location database to calculate the distances, but that would live outside the calendar application itself.)
 
Machine learning on a larger scale relies on logic programmed into it by its designers, and it relies heavily on “training sets”: collections of data that give the machine a baseline from which it can spot patterns and make predictions. When it’s time to talk about machine learning at scale, the data sets become quite large. (And the volume of inference and assumption logic required expands dramatically; more on that in point 3.)
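 
As a toy illustration of what a training set buys you (the data, features, and library choice here are all mine, invented purely for the example), here’s a baseline model being fit on a handful of labeled cases and then asked to predict new ones:
 
```python
# Toy "training set": labeled examples the model learns a baseline from.
# Features and labels are invented; a real HR system would need far more (and far better) data.
from sklearn.tree import DecisionTreeClassifier

# [years_of_experience, structured_interview_score] -> 1 = succeeded in role, 0 = did not
X_train = [[1, 60], [2, 55], [4, 70], [6, 85], [8, 80], [10, 90]]
y_train = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# With a baseline in place, the model will generalize -- for better or worse --
# to candidates it has never seen. Whether those predictions mean anything
# depends entirely on the quality and representativeness of the six rows above.
print(model.predict([[5, 75], [1, 50]]))
```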
 
So if you’re an HR practitioner trying to navigate the “AI” hype, you need to know a few things about data science.
 
It’s a hot area right now and there’s tons of reference material to review, but for our purposes there are three pitfalls to be aware of in any tech that claims to use “AI” for HR:
 
Incomplete or biased data: Even a supremely well designed set of assumptions and inferences is going to fail if you don’t have enough relevant facts available for the “intelligence” to reach conclusions that are both true and precise. The classic, almost meme-like AI fail for this is Watson for Oncology.
 
The moral of the story: when it comes to training sets, if you’re not including the right data, at the right scale, you’ll often get results that are spurious at best, dangerous at worst. Keep in mind oncology is a “hard” science, which should admit of certainty and precision. The data is by definition going to be fuzzier and less precise in a “soft” discipline like psychology . . . or human resources.
Correlation, conditionals, and causality: It’s one thing to say “A and B are both present”. It’s a different thing to say “If A, then B”. It’s a very different thing entirely to say “A caused B”. Machine learning technology does a fabulous job at the first two, and can be used to sift terabytes of data to look for correlated facts and conditional relationships. But as data science and AI are pushed further and further, the distinction between correlation, conditionals and causation is often overlooked.
 
In the HR use case, we’re talking about people, who make choices, which tends to throw deterministic causal connections out the window.
 
As a crude example, you might have correlated facts that show home-office based employees perform better (“A and B are both present”). You might run a test and have some additional folks try working from home, and see that their performance went up (“if A then B”). But even with a huge data set, it would be imprudent to jump to the causal conclusion “home office makes people perform better”. (At least until you’ve isolated all of the variables. For example, maybe the manager reviewing the performance of the home-office based people was lenient, or the metrics used weren’t consistent. See the first pitfall above.)
 
Simpson’s Paradox: the phenomenon in which trends that are clear and observable in data sets viewed alone go utterly wonky, or even reverse, when those data sets are combined. (A toy sketch of this appears just below.)
 
I have one anecdotal but directly HR-related example: a company wanted to hire a group of developers that would reflect the diverse demographic of the markets they served. Their recruitment software was programmed to search across all of their candidate sourcing databases globally . . . . yielding a list that was 95% male and 99% Indian.
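 
To see how combining data sets can go wonky, here is a deliberately tiny, invented example in the spirit of the home-office question above. Within each team, home-office employees hit their targets more often; combine the teams, and the office crowd suddenly looks far better.
 
```python
# Invented numbers illustrating Simpson's Paradox: the per-team trend reverses in aggregate.
groups = {
    # team: {work_mode: (employees_hitting_target, total_employees)}
    "sales":       {"home": (18, 20),  "office": (70, 80)},   # home 90% vs office 88%
    "engineering": {"home": (30, 100), "office": (5, 20)},    # home 30% vs office 25%
}

totals = {"home": [0, 0], "office": [0, 0]}
for team, modes in groups.items():
    for mode, (hit, total) in modes.items():
        totals[mode][0] += hit
        totals[mode][1] += total
        print(f"{team:12s} {mode:7s} {hit / total:.0%}")

print("---- combined ----")
for mode, (hit, total) in totals.items():
    print(f"{'all teams':12s} {mode:7s} {hit / total:.0%}")   # home 40% vs office 75%
```
 
Neither view of the numbers is wrong on its own; the point is that the conclusion flips depending on how the data is sliced, which is exactly the kind of nuance a glossy “AI” pitch tends to skip.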
 

3. Even if the data is right, it may not be a truly intelligent, learning system


There are exceptions, but in general, all forms of machine learning rely on the training sets I mentioned above in the section about data. Those training sets form the “infant knowledge” that will be the basis of all further machine learning.
 
As noted in the previous section, these training sets are important from a purely data perspective; they have to be in place before the system can start forming the assumptions and inferences that constitute “intelligence”.
 
Most of the learning still to be had, in the form of inference and assumption, is exceptional. Not in the sense of being especially good, but in the sense that a learning system learns by observing exceptions to what it was taught in the training process.
 
What this means is that even if you get the “infant knowledge” right, with appropriate data scale and inclusion, even if your data sets don’t go all Simpson Paradoxical on you, you’re still not guaranteed a truly intelligent, learning system. The system’s ability to learn will only be as good as the logic that is programmed into it in terms of observing exceptions to the initial training set data.
 
As an analogy, let’s say you’re given 1000 blue marbles, and you’ve been fully trained to recognize blue marbles. Unless you also know what “marble” and “yellow” are, you won’t be able to recognize yellow marbles as exceptions to the blue marble definition.
 
You might see that it’s a marble; you might see that it’s “not blue”. But you won’t see it as a “yellow marble”, and won’t be able to learn “yellow marble” until someone teaches you the right set of inferences and definitions. An example comes from Facebook, with the bots Alice and Bob. A far more tragic example is Uber’s attempt at a self-driving car(1).
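 
A very rough sketch of the marble problem in code (the representation is entirely mine, chosen only to make the point): a system “trained” exclusively on blue marbles can notice that something falls outside its training data, but it has no vocabulary for what that something actually is.
 
```python
# Sketch of the marble analogy: 1,000 training examples, all of them blue marbles.
training_set = [{"shape": "marble", "color": "blue"} for _ in range(1000)]

# "Training": record every attribute value ever observed, plus the only label ever taught.
known_values = {}
for example in training_set:
    for attribute, value in example.items():
        known_values.setdefault(attribute, set()).add(value)

def classify(item):
    unrecognized = {a: v for a, v in item.items() if v not in known_values.get(a, set())}
    if not unrecognized:
        return "blue marble"  # the only label the training set ever provided
    # Best it can do is report a mismatch. It cannot say "yellow marble", because
    # neither "yellow" nor that label existed anywhere in its training data.
    return f"not a blue marble (values outside training data: {unrecognized})"

print(classify({"shape": "marble", "color": "blue"}))    # -> blue marble
print(classify({"shape": "marble", "color": "yellow"}))  # -> not a blue marble (...)
```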
 
I’m no Luddite. I have a huge amount of faith in software designers and programmers to get complex things right. But the purely human concepts that are relevant in HR, things like ethnicity, self-identity, engagement, and job satisfaction, simply don’t lend themselves well to algorithms and logarithms. For a somewhat dark example of this, consider Microsoft’s Tay chatbot.
 

Putting it all together . . . . where might true AI really fit in HR?


The typical model for the employee journey starts at talent search and moves through candidate management, onboarding, performance management, and talent management. Where on this spectrum does AI offer the most promise? Where does it present the greatest risks?
 
An employee of five years’ seniority has built up a very large data set that AI could work with. This would obviously include input from managers during the performance management and development processes, but it goes far beyond that.
 
Increasingly, sophisticated companies are tapping into “exhaust data” that gives insight into employee behavior. This would include things like hours spent logged in to company systems, number of emails sent, times emails were sent, and internet search history. It could include data from whatever workflow systems an employee works in, which might include a CRM, call center software, an order entry system, a loan origination system, or a host of other point applications. That data set is large and complete enough that AI could yield fruitful discoveries related to what makes one employee successful, and what causes another to quit in disgust.
 
But at the beginning of the employee journey, when in the talent search and acquisition phase, you have a small data set related to any individual potential candidate. You might have access to college transcripts, a CV, and not much else. That data set is also highly subjective (in data science terms: “biased”) since much or all of the data is provided by the candidate themselves.
 
At the talent acquisition phase, you also have far less basis for making truly logical assumptions and inferences about a candidate’s potential. Even a well designed and well built system will have great difficulty reliably predicting which candidates are going to pan out and which won’t. Humans make choices, and however much we measure them, we’ll always be limited in our ability to predict them.
 
There’s also some risk in relying on assumptions and inferences at all at this early, critical stage of the employee journey. The stakes are high: you’re representing your company and your brand to someone you’re trying to recruit and retain, and the potential for creating “anti-champions” is real. As noted earlier, purely human concepts don’t “math” well, especially the critical concepts related to diversity and inclusion, like ethnicity and gender identity. Given the sensitivity of these topics, it seems far more prudent to have candidate self-selection play a role in the process.
 
As “AI” has made its presence felt in the HR space, a counter-movement has developed, with many HR practitioners pointing out that the H in HR stands for “human”. It’s inevitable that AI will continue to get traction in HR and HCM, but the sophisticated HR professional will be well served by knowing which areas of the employee journey lend themselves to AI, and which are best left to humans.
 
It’s my hope that this article helped you, or at least provoked some thought, in that regard. I’m always interested in continuing the discussion: mark@turazo.com.
Written by Mark Pheifer

Growth-oriented strategic thinker and doer.
