The halt marks an abrupt about-face for a service many had criticized as a disturbing symbol of the growing role algorithms are playing in judging and predicting human behavior.
Predictim — one of several AI services now being used to analyze job candidates — scanned babysitters' social media and offered automated assessments of the sitters' personalities, including their risk of having a “bad attitude.”
Company executives, who did not respond to a request for comment early Friday, had contended the social-media-scanning service was a critical tool for helping parents stop risky or “born evil” babysitters before they could get close to their kids. They had previously argued that the service had controls against bias and privacy risks, and that public criticism was misguided.
The Post spoke with several parents who said the personality screenings had influenced their thinking about a babysitter’s character, as well as a babysitter who was stunned to learn that the automated system had flagged her as an elevated risk for bullying and disrespect.
Facebook, Twitter and Instagram blocked much of the service’s access in recent weeks, saying its social-media scans had violated rules on user surveillance and data privacy. Company executives said late last month that they were undeterred by the restrictions and intended to incorporate even more data, such as babysitters' blog posts, into their analyses.
Kate Crawford, a researcher and co-founder of the AI Now Institute, called the service “error-prone, based on broken assumptions, and privacy invading. What’s worse — it’s a horrifying symptom of the growing power asymmetry between employers and job seekers. And low wage workers don’t get to opt out.”