Before dying by suicide in 2012, a student from Nanjing in eastern China made a final post on Weibo saying goodbye. It quickly became an online gathering place for the depressed, garnering millions of responses. Users called it a “shudong,” or “tree hole,” for things they couldn’t say out loud. And nine years on, it still draws new comments every day.

Each of these comments is a data point for the Tree Hole Rescue Project, an organization that uses artificial intelligence to scan Weibo posts with the aim of identifying users at risk of hurting themselves.

The algorithm, designed by Zhisheng Huang, a Chinese computer scientist based in the Netherlands, flags posts to volunteers in a WeChat group who stand ready to message users, call their relatives or their employers — and, occasionally, alert the police.

Initiatives seeking to harness artificial intelligence for mental health problems have abounded in China in recent years, part of a broader embrace of digital technology that has, among its effects, given rise to TikTok and made cash payments nearly obsolete. Projects like Huang’s, which scour people’s social media accounts, have won awards and been praised by state-run news outlets, prevailing over thorny questions of privacy and efficacy that have slowed similar initiatives in the United States and Europe.

Huang, a Guangdong native who has been studying AI for 30 years, said he thinks his algorithm helps reach people in need in a country where mental illness is widely stigmatized, and where, in 2019, more than 116,000 people died by suicide. But the gaps in China’s mental health resources are deep and systemic, experts say, and there’s little clinical evidence that AI-powered solutions are effective.

The Tree Hole Rescue Project started in 2018 and has grown from about two dozen volunteers — mostly AI enthusiasts — to 700 people across China. Every day, the algorithm flags about 100 comments on popular “tree holes” like the Nanjing student’s suicide note, which was briefly scrubbed by Weibo, then reposted.

“Suicide risk: 7. Male. City: Fujian, Xiamen. Age: 29. ‘The decision of whether to jump off a building is difficult.’ ”

“Suicide risk: 5. Female. City: Jiangsu, Yangzhou. Age: Unknown. ‘Actually, I still want to die, I just can’t think of how.’ ”
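
Based only on the alert format quoted above, here is a minimal sketch of how such flagged records might be represented and routed to volunteers. The field names, the 10-point scale, and the risk threshold are assumptions for illustration; the Tree Hole Rescue Project’s actual schema and code are not public.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the alert format quoted above.
# Field names and the threshold are illustrative assumptions.
@dataclass
class FlaggedPost:
    risk: int            # model-assigned risk level (assumed 0-10 scale)
    gender: str          # "Male" / "Female" / "Unknown"
    city: str            # e.g., "Fujian, Xiamen"
    age: Optional[int]   # None when the profile gives no age
    text: str            # the flagged Weibo comment

ALERT_THRESHOLD = 5      # assumed cutoff for notifying volunteers

def format_alert(post: FlaggedPost) -> str:
    """Render a record in the style of the alerts quoted above."""
    age = str(post.age) if post.age is not None else "Unknown"
    return (f"Suicide risk: {post.risk}. {post.gender}. "
            f"City: {post.city}. Age: {age}. “{post.text}”")

def route_to_volunteers(posts: list[FlaggedPost]) -> list[str]:
    """Keep only posts at or above the threshold, highest risk first."""
    urgent = sorted((p for p in posts if p.risk >= ALERT_THRESHOLD),
                    key=lambda p: p.risk, reverse=True)
    return [format_alert(p) for p in urgent]

if __name__ == "__main__":
    sample = [FlaggedPost(7, "Male", "Fujian, Xiamen", 29,
                          "The decision of whether to jump off a building is difficult.")]
    for alert in route_to_volunteers(sample):
        print(alert)
```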

The volunteers are typically able to reach only one-tenth of those considered at risk — a total of about 14,000 people over three years, Huang said.

“It feels like a noble thing to do,” said Yuanyuan Yang, 37, a teacher from Henan province who says she spends as much time as she can volunteering. “It’s meaningful.”

In one of the project’s earliest cases, a man posted that he was closing his windows in preparation to die by suicide, Huang said. Volunteers tracked down a phone number for his self-run company and reached his mother, who was in the adjoining room. She was enraged by the claim that her son was about to hurt himself, Huang said, until she went into his bedroom and saw that the volunteers were right.

But individual anecdotes aren’t evidence that a tool is clinically useful, said Xiaoduo Fan, a psychiatry professor who leads the China Mental Health Program at the University of Massachusetts Medical School.

Mental health resources in China have grown since the Cultural Revolution, when mental illness was dismissed as bourgeois delusion. But according to the World Health Organization, there are still only about 2 psychiatrists per 100,000 residents in China, compared with 10 in the United States and 13 in Germany. In 2019, a Shanghai research center said there was a shortage of at least 400,000 mental health professionals.

“There’s like a few drops of rain for this large country,” Fan said. “The potential [of AI], in theory, is very promising. But there are so many problems in implementation.”

Fan runs an exchange program with Chinese psychiatrists and said that during recent trips to the country, crews of self-proclaimed mental health innovators trailed his delegation from city to city, pitching their apps and algorithms in hopes of getting a psychiatrist’s endorsement.

“There’s a whole obsession. It’s like overexcitement,” Fan said. “There’s this fantasy of ‘wow, this is going to solve all our problems.’ ”

Hundreds of Chinese apps have popped up in recent years promising to provide support or healing through chatbots, virtual reality or other newfangled technologies. Many of the country’s large Internet companies have launched their own psychological health projects. And in 2019, the e-commerce giant Alibaba set up a “Guarding Life” team that intercepted orders deemed problematic, such as those for lethal drugs. (Alibaba declined to comment for this story.)

The problem, Fan said, is that in China, there isn’t adequate regulation or transparency to ensure that such tools are used ethically. Even with nonprofit entities such as the Tree Hole Rescue Project, he noted, it is not clear what relationship volunteers have with the people they try to help. China’s National Health Commission did not respond to inquiries about regulations.

Huang said about 100 of his rescuers are mental health professionals; 200 are social workers; and 400 are volunteers who go through in-house training in counseling.

The group doesn’t refer people to long-term treatment, but some volunteers provide years of “companionship,” Huang said. In 2019, a former volunteer told a Chinese-language media outlet that some of his peers became so attached to users that they promised to become their romantic partners or to help them raise money. Huang conceded that this happened in the project’s early days but said it no longer does; a 130-page guidebook now explicitly bans such behavior, he said.

Stella Zhou, co-founder of a Wuhan-based nonprofit that provides art therapy, said her organization once tried using an AI-powered mental health chatbot provided by the China Association of Social Workers but stopped after three months. Conversations often broke down because clients spoke with accents the chatbot had not been built to recognize, she said.

“When users have to repeat their issues again and again, sometimes it may even intensify their negative emotions,” Zhou added.

In the United States, suicide-prediction algorithms are still largely being researched, not applied, experts say.

The Department of Veterans Affairs is studying a new program, but VA doctors emphasize that the algorithm’s results do not constitute a prognosis and only supplement what doctors already know about patients.

When Facebook, which is banned in China, said it was starting to screen posts for suicide risk, psychiatrists in the United States and Europe pushed back loudly, arguing that the company’s methods were not transparent, could trigger unwarranted interventions and were being employed without user consent.

John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center, said AI-driven treatment also often carries a risk of algorithmic bias. Huang said his AI disproportionately flags posts from women, even though, according to WHO data, the suicide rate in China is higher for men than for women.
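
The concern Torous raises can be made concrete with a simple disparity check: compare the share of flags a model assigns to each group against a reference rate. A minimal sketch, with all counts and reference shares invented purely for illustration:

```python
from collections import Counter

# Hypothetical numbers, for illustration only: flags emitted by a model,
# broken down by gender, versus an assumed reference share of actual risk
# (e.g., derived from WHO statistics).
flags = Counter({"female": 70, "male": 30})        # assumed model output
reference_share = {"female": 0.45, "male": 0.55}   # assumed base rates

total = sum(flags.values())
for group, ref in reference_share.items():
    observed = flags[group] / total
    # A large gap between the share of flags and the reference share
    # is the kind of disparity Torous warns about.
    print(f"{group}: flagged {observed:.0%} vs. reference {ref:.0%} "
          f"(ratio {observed / ref:.2f})")
```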

“We just haven’t seen evidence that these systems work,” Torous said. “Maybe [AI] can help augment care. But it can’t replace care that was never there before.”

But in China, people who are suffering are eager to embrace virtually any proposal that promises relief, even one that comes with unresolved questions, Fan said. The needs have only deepened during the pandemic.

“People are desperate for anything they can find,” Fan said.

A high school student in Fuzhou, speaking on the condition of anonymity to discuss a sensitive subject, said that she was diagnosed with depression two years ago but that her mother threw away her pills. She recently failed China’s notoriously tough college-entrance exam — the gaokao — and has found herself crying uncontrollably. Alone at home, she dwells in online “tree holes,” where at least, she said, “everyone is leading the same painful life.”

If you or someone you know needs help, call the National Suicide Prevention Lifeline at 800-273-TALK (8255). Crisis Text Line also provides free, 24/7, confidential support via text message to people in crisis when they text 741741.

Lyric Li in Seoul contributed to this report.