Jessie Battaglia holds her son, Bennett, in their home in October. She used a Predictim scan to automatically analyze a babysitter's personality. (Kyle Grillot for The Washington Post)

Predictim, the artificial-intelligence start-up that sold automated “risk” screenings of babysitters, says it has put the service on “pause,” citing heavy backlash following a Washington Post report on the service last month.

Predictim’s website says the company has been “overwhelmed” by feedback and will indefinitely postpone its full launch while its executives “focus on evaluating how we offer our service and making changes to address some of the suggestions we received.” The site says it will not generate new scans and will offer refunds.

The halt marks an abrupt about-face for a service many had criticized as a disturbing symbol of the growing role algorithms are playing in judging and predicting human behavior.

Predictim — one of several AI services now being used to analyze job candidates — scanned babysitters' social media and offered automated assessments of the sitters' personalities, including their risk of having a “bad attitude.”

Company executives, who did not respond to a request for comment early Friday, had contended the social-media-scanning service was a critical tool for helping parents stop risky or “born evil” babysitters before they could get close to their kids. They had previously argued that the service had controls against bias and privacy risks, and that public criticism was misguided.

The Post spoke with several parents who said the personality screenings had influenced their thinking about a babysitter’s character, as well as a babysitter who was stunned to learn that the automated system had flagged her as an elevated risk for bullying and disrespect.

Facebook, Twitter and Instagram blocked much of the service’s access in recent weeks, saying its social-media scans had violated rules on user surveillance and data privacy. Company executives said late last month that they were undeterred by the restrictions and intended to begin incorporating even more data, such as babysitters' blog posts, into the company's analyses.

Kate Crawford, a researcher and co-founder of the AI Now Institute, called the service “error-prone, based on broken assumptions, and privacy invading. What’s worse — it’s a horrifying symptom of the growing power asymmetry between employers and job seekers. And low wage workers don’t get to opt out.”