“It definitely suggests that more decision making powers should be given to the machine relative to the humans,” says University of Toronto professor Mitchell Hoffman, one of the report's authors.
To figure this out, the researchers obtained a dataset of 300,000 hires at 15 companies that use job tests for low-skilled positions, such as call center workers and standardized test graders. By measuring how applicants scored initially, whether a hiring manager overrode a low test score to bring them on, and how the worker performed later, they found not only that testing improved job tenure by 15 percent — a key metric, since turnover is so expensive for businesses — but also that human intervention was associated with significantly worse results.
Moreover, although the machine-picked workers didn’t turn out to be much more productive than those for whom a hiring manager had stepped in, they weren’t less productive either — suggesting that the recruiters weren’t even making a worthwhile trade-off between a worker’s effectiveness and longevity in the job.
Job tests, of course, have been around for a long time, and they keep getting better at predicting someone’s suitability for a given job. So why do HR people still think they know better?
Julie Moreland, senior vice president for strategy and people sciences at Peoplematter, makes those job tests and helps implement them for large corporate clients. Although she recommends that companies continue to do in-person interviews as part of the recruiting process, she figures that about a third of hiring managers don’t place enough weight on the assessments, and thinks that may be in part because inexperienced bosses haven’t been taught how to use them effectively.
“They’re basically being promoted, given a title, maybe given a little more money, and they’re expected to do a job that they have no training for,” Moreland says, noting that managers are now turning over faster than rank-and-file employees. “They don’t know how to do an interview, they don’t know how to hire somebody, they don’t know how to hold a meeting, how to delegate, how to discipline, how to motivate.”
That doesn’t just result in inferior hires; it also lets more prejudice creep into the system. Although screening questionnaires (and particularly behavioral tests) have been faulted in the past for harboring subtle bias, humans are much more likely to make decisions based on “cultural fit,” which can serve as a proxy for gender and ethnic homogeneity.
“From a human perspective, we like people who are like us,” Moreland says, explaining the behavior of hiring managers. “They’re not thinking about the job, they’re thinking ‘I can work with this person, I relate to them.’ It skews their logic. Anybody that says they do not have bias in their interview is not being real.”
In addition, research has shown that people develop an irrational aversion to relying on algorithms after a bad experience, even if the algorithms have been proven to work most of the time. And of course, there’s the natural human fear of being replaced by a machine, which could happen as ever-larger datasets make computers even more precise in their assessments.
The optimistic scenario, for the HR profession, is that humans are repurposed for higher-order work — after all, data scientists can’t build a hiring algorithm without some sector-specific expertise. “What true professional HR providers realize is they’ve taken something and made it more efficient,” Moreland says, “and therefore they can spend more of their time on strategy rather than interviewing.”
Or perhaps they will share the fate of many of the workers they hire: increasingly rendered obsolete.