Scientists took a major step forward in predictive technology this week with the development of a system of blood tests and an app that they say can predict with more than 90 percent accuracy whether someone will start thinking about suicide or attempt it.
In a study published Tuesday, researchers at Indiana University School of Medicine presented details of an app that measures mood and anxiety and that asks people a series of questions about life issues, things like: How high is your physical energy and the amount of moving about that you feel like doing right now? How good do you feel about yourself and your accomplishments right now? How uncertain about things do you feel right now?
They purposely avoided asking any questions about suicide directly. Writing in the journal Molecular Psychiatry, the researchers said that "predicting suicidal behavior in individuals is one of the hard problems in psychiatry, and in society at large."
"One cannot always ask individuals if they are suicidal, as desire not to be stopped or future impulsive changes of mind may make their self-report of feelings, thoughts and plans to be unreliable," Alexander B. Niculescu III, a professor of psychiatry and medical neuroscience at Indiana University, and his co-authors wrote.
Separately, the researchers studied a group of 217 males who had been diagnosed with bipolar disorder, major depressive disorder, schizophrenia and other psychiatric conditions. About 20 percent went from no suicidal thoughts to a high level of suicidal thoughts while they were being seen at a clinic at the university.
By analyzing their blood samples, the researchers were able to identify RNA biomarkers that appeared to predict suicidal thinking.
They wrote that it's unclear how well the biomarkers would work in the larger population because the study was limited to high-risk males with psychiatric diagnoses, but that the app is ready to be deployed and tested on a wider group in real-world settings such as emergency rooms.
That idea may advance medical science, but it's unclear what would happen if there's a flaw in the algorithm, or if the training given to people using the tool is inadequate and they over- or underreact to its conclusions.