
Scientists’ ability to influence public debate on issues of scientific importance depends upon our reputations, which are derived from the robustness and reproducibility of our results.

The Aug. 28 news article “A study of studies finds it’s hard to replicate scientific results” reported on a large study of scientific reproducibility (or lack of it) in the psychological sciences but extended this conclusion to science in general, quite unfairly, potentially tarnishing the standing of science.

I have examined the results of many researchers in the course of my own work in earth sciences, finding very few erroneous results. Organizations such as the U.N. Intergovernmental Panel on Climate Change go to great lengths to ensure the scientific accuracy of the data incorporated into climate models. I suspect that for many of the hard sciences — physics, chemistry, biology, geology and medicine — results are well supported by reproducible observations. In the softer sciences, such as psychology and economics, perhaps the human element or specific samples make it far more difficult to ensure that verifiable and meaningful results are produced.

To extend the results of the reported study so blithely to science in general without evidence potentially does great harm to otherwise high-quality research necessary for the advancement, improvement and protection of society.

Derrick Paul Hasterok, Windsor Gardens, Australia

It is ironic that the headline “A study of studies finds it’s hard to replicate scientific results,” atop an article about scientific studies reaching conclusions that are overly broad and therefore potentially incorrect, is itself overly broad and therefore incorrect. The casual reader is led by this headline to assume that all of science is in question. In fact, the study examined only a particular class of psychology experiments involving human subjects in controlled situations that, in some cases, were less controlled, or controlled in different ways, than the researchers may have believed.

The conclusion is that more care may be needed in this type of behavioral experiment if the results are to be reproducible (which is certainly what all parties involved would hope for). By implying that the conclusion is that “it’s hard to replicate scientific results,” The Post does a disservice to its readership and to science.

John T. Fourkas, Bethesda