HireVue’s “AI-driven assessments,” which more than 100 employers have used on a million-plus job candidates, use video interviews to analyze hundreds of thousands of data points related to a person’s speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates’ skills and behaviors, including their “willingness to learn” and “personal stability.”
Candidates aren’t told their scores, but employers can use those reports to decide whom to hire or disregard. The Utah-based company was the subject of a Washington Post report last month, in which AI researchers criticized its technology as “profoundly disturbing” and “opaque.”
HireVue’s “intrusive collection and secret analysis of biometric data” causes substantial privacy and financial harms, officials at the Electronic Privacy Information Center (EPIC) wrote. And “because these algorithms are secret,” they added, “... it is impossible for job candidates to know how their personal data is being used or to consent to such uses.”
The FTC declined to comment. HireVue did not respond to requests for comment.
The complaint could for the first time throw a federal spotlight on a growing industry of tech firms that advertise automated systems they say can assess candidates’ résumés, divine people’s personalities and pinpoint problematic recruits. Critics say the systems are dehumanizing, invasive and built on flawed science that could perpetuate discriminatory hiring practices.
The technology is also facing increasing pressure from lawmakers. In January, an Illinois law will take effect requiring employers to tell job applicants and regulators how their AI video-interview systems work. Co-sponsors of the bill, signed in August by Gov. J.B. Pritzker (D), said they worried the systems could unfairly penalize candidates and hide biases in how they assess a “model employee.”
HireVue’s systems have become pervasive for employers because they can lower recruiting costs and speed up turnaround time for new hires. Some colleges now instruct students on how to impress the hidden algorithms: In the FTC filing, EPIC lawyers quote a guide from the University of Maryland business school, which tells interviewees, “Robots compare you against existing success stories; they don’t look for out-of-the-box candidates.”
EPIC, based in Washington, has become one of the tech industry’s most renowned and effective watchdogs, helping shape U.S. policy around online privacy, civil liberties and domestic surveillance for nearly 25 years. The group has challenged tech giants and government agencies, including Facebook, Google and the National Security Agency, through consumer complaints, agency filings and federal lawsuits.
EPIC urged the FTC to halt HireVue’s automatic scoring of job candidates and make public the algorithms and criteria used in analyzing people’s behavior. The technology is largely unregulated, but the FTC regularly enforces its “unfair or deceptive acts or practices” statute against companies found to be making claims to consumers without a “reasonable basis” in a way likely to “cause substantial injury.”
In its complaint, EPIC officials said HireVue’s AI-driven assessments produce results that are “biased, unprovable and not replicable.” The system, they argued, could unfairly score someone based on prejudices related to their gender, race, sexual orientation or neurological differences. HireVue says it uses “world-class bias testing” techniques to detect and prevent hiring discrimination.
HireVue advertises that its technology does not use “facial recognition technology” because its systems do not attempt to identify people. But EPIC argued that HireVue’s assertion is misleading, and that the FTC has ruled the term applies to any “technologies that analyze facial geometry to predict demographic characteristics, expression or emotions.”
EPIC also argued that HireVue had failed to meet international standards for AI systems set by the Organization for Economic Cooperation and Development and endorsed by the United States earlier this year. HireVue violated those principles, EPIC said, because its algorithmic assessments can’t be evaluated or “meaningfully challenged” by the job candidates they’ve assessed.
The company has not ensured the accuracy, reliability or validity of its computer-generated scores, the complaint added. It has also not “adequately evaluated whether the purpose, objectives, and benefits of its algorithmic assessments outweigh the risks.”