It was just the kind of appealing paradox that people love to read about: a study published in a prestigious science journal in 2013 found that a good way to predict which musicians would win at elite competitions was not by listening to them, but by watching them on mute.

NPR, the Economist and the BBC ate up the finding and its compelling backstory. The Harvard-trained psychologist, who did the work solo, said she was inspired by her own experience performing as a pianist.

“For the last two decades, I’ve taken part in various competitions,” Chia-Jung Tsay told the Harvard Gazette. “Through this experience, I found that depending on what type of evaluations were used — whether it was live rounds or audio recordings that had to be submitted — the results might vary widely.”

The work also caught the attention of a group of researchers in Boston who were interested in figuring out whether early musical performances hold hints of later success. When Tsay's study was published, suggesting that well-trained musicians' eyes were better than their ears at picking winners, the result was so fascinating that Boston College undergraduate Dan Scannell and his advisers decided to explore it further. Maybe there were nuances to the visual effect that would help them crack open their own scientific question.

Thus begins an epic and absurd saga, comprising more than a year and a half of back-and-forth e-mails, underpinned by growing frustration and annoyance on both sides, that ultimately arrives nowhere. Spoiler alert: two years after the study came out, Scannell has graduated. His advisers, Ellen Winner, chair of the psychology department at Boston College, and Samuel Mehr, a graduate student at Harvard University, still haven't been able to obtain all the videos they need to repeat the experiment to see if its striking result holds up.

The importance of repeating experiments gets lots of praise from scientists. It is crucial, because the whole point of science is to use experiments to discover larger truths. For a finding to have any significance at all, it has to be true more than just the one time the original researchers did it. Study findings can be wrong by chance. A major result may depend on a subtlety in how the experiment was conducted. Very rarely, there is fraud or misconduct.

Leaders in fields ranging from biomedical research to social science have said it again and again: replication matters. What the Boston group's travails highlight is the dirtier backstory: In practice, replication can be an aggravating, sometimes unpleasant, mostly unrewarding exercise that makes no friends, brings little prestige and can consume insane amounts of time, resources and effort.

The researchers started in 2013. They were not using the exact same videos of musical performances that Tsay used, but simply setting up a similar experiment to test whether looks mattered more than sounds. The results came up flat, Mehr said -- a null result. At least the way they had set up the experiment, there was no effect: The visual information didn’t seem to affect people's ability to choose correctly, regardless of their level of musical skill.

In scientific research, it's not uncommon to get a conflicting result, since there may be differences in how experiments are set up. So in February 2014, they e-mailed Tsay -- an assistant professor at University College London -- to request her data and the original videos. They wanted to replicate her experiment exactly. In May, Tsay shared her data. The Boston researchers said they wanted the original video recordings, too. She sent them links. But three of the videos she used were no longer available online.

The researchers asked her to share the video files, not just the links. They offered to let her use their Dropbox account. She told them she didn’t own the copyright, and couldn’t hand the videos over. She sought legal advice. The researchers in Boston enlisted the help of the editors of the Proceedings of the National Academy of Sciences, the journal that had published the original paper.

The Boston researchers were told they could travel to London and watch the video clips on her computer. But this offer didn't really present a solution, since they wanted to repeat the experiment by allowing hundreds of research participants to watch the video clips.

Months later, they've basically reached an impasse.

"I fully agree that the availability of materials for replication studies is important for the progress of science," Tsay wrote in an e-mail, detailing her efforts to share her data with the Boston research team. While the videos she used in the experiment were publicly available online when she did the work, three of them had been taken down by 2015.

"Upon learning of this, I looked into alternate ways of sending the files to the group. Video hosting sites indicate that uploading requires that the individual must be the copyright owner. I am not the copyright owner," Tsay wrote in the e-mail. "Legal counsel at University College London and the University of Pennsylvania both advised me that although my own use of the videos for research fell within fair use, I am not legally permitted to distribute copies of the videos, as I am not the copyright holder."

Winner says she doesn’t want to single out Tsay, who is following legal advice. But Winner does blame the journal for publishing a paper that can't be replicated. The persistence it has taken her team to get to this point shows the barriers that line up against researchers who want to do something that scientists are supposed to celebrate.

"It’s a huge amount of trouble. I don’t know if I’ll ever do it again, because it’s exhausting to do, but I think it’s really important. I think what I come away thinking is the journal absolutely needs to figure out that everything they publish has to have materials that are shareable," and if it is not, they should not publish the work, Winner said.

Daniel Salsbury, the deputy executive editor for PNAS, said in an e-mail that the journal was considering attaching a note to the paper explaining that the original materials that underpin the research are not all available.

"Because of the copyright restrictions for all the videos in the study, most particularly the 3 videos in question, which came to light after publication, we understand that the author is unable to distribute them," Salsbury said. "Given the ephemeral nature of materials available on the Internet and due to the complexities of social media (Instagram, Twitter, Facebook, etc.), we have since made the PNAS submission requirements more explicit regarding data availability and require authors to explicitly state any restrictions on the availability of data in their papers and to include a statement in their papers informing readers how to access the data."

One way to spin this story is as a narrow teaching moment for science; it highlights the problems that come with using materials hosted on the Internet in experiments. Another is to see this episode as drawing back the curtain on a discipline in which the culture and incentives are aligned to discourage people from carrying out one of the most fundamental activities in research. Mehr and Winner shared with the Post dozens of e-mails from over the past year and a half, which show how their efforts quickly got bogged down in concerns other than the integrity and validity of the finding.

If anything, recent studies suggest they are not alone.

A study recently published in the journal Collabra found that when researchers requested the original data from 394 psychology papers published in 2012, only 38 percent of the authors complied right away or after a reminder.

Earlier this year, a shocking study in the journal Science found that of 100 studies, 61 didn't replicate. Those kinds of results grab headlines, but they leave out something that may be just as important as whether any individual result bears out: what a royal pain it can be to try to replicate things in the first place.

"The fact that all that requires a whole lot of work and is frustrating is a real indicator of where the cultural practices are now," said Brian Nosek, executive director of the Center for Open Science, which has pushed for replication. "Most scientists say replication is a cornerstone of how science works. ... It’s easy to say, but it’s hard to do."