
In my experience performing graduate research at a university:

* Newness is prized

* Replicating old science is not prized

* Funding and papers happen for breakthroughs

In software engineering:

* Many bugs in code creep through even exacting peer reviews.

Recapping the short list: new things make you money, reviewing things is error-prone, and replicating old things doesn't make you money until someone wants to rely on them. Hmmmm. The disincentives to replicate unapplied research speak for themselves.

What then is to be done beyond hand-wringing and moaning?

Several things have to happen. First, grants need to be given for replicating research; replication needs to become something that is frequently done. Second, papers confirming or disproving prior results need to be a recognized category in journals. I do not mean disproving someone else's pet theory in favor of yours; I mean disproving the result, period, regardless of whether it helps advance your particular line of work.

As someone who's currently in industry (and is looking toward going back for the PhD), I encourage academics to open up and/or push the area of "negative results" as a recognized category of paper. If you have graduate students who can't reproduce prior work, please have them publish that!

There have been occasional comments about Journals of Negative Results; maybe those could come to fruition sometime. :-)



As an experimental physicist, I want to point out that negative results do get published -- for example, much of the experimental work on gravity amounts to showing no measurable difference from theoretical expectation, to a high degree of accuracy [1] -- but writing the papers for them is much harder than writing up positive results. The reason is simple: you have to convincingly show that your failure to show a result is not just due to your own mistakes.

Here is an analogy to a trivial situation. Which is more convincing?

Positive result: By following the steps in the documentation, I installed MS Word on my computer. Therefore, MS Word can be installed on my computer.

Negative result: I followed the steps in the documentation, but MS Word still doesn't work on my computer. Therefore, MS Word cannot be installed on my computer.

If you're like me, you barely pause after reading the positive result, but the negative one brings to mind piles of questions: did you have the right version for your OS? Do you have enough disk space? Does your computer work properly in other respects? A really tricky problem could take ages to figure out. At some point it's probably going to seem wiser just to abandon the problem, and get a different computer or a different program.

So, as a scientist, when you're faced with a negative result, you know you have a battle ahead of you. If you don't think anyone will care very much about your result, moving on to the next project may seem to be the right choice. It's a tough situation, and I sympathize with anyone facing it. And is it the right choice for science as a whole? Sometimes it isn't, but sometimes it is.

[1] For one such example, check out the Eotvos experiment and its many descendants. http://en.wikipedia.org/wiki/Eötvös_experiment


Nobody has yet mentioned a very well-known negative result: the Michelson-Morley experiment, a complicated and time-consuming endeavour.


Even positive results aren't always convincing when they go against the scientific mainstream. For example, consider the Avery/MacLeod experiment, which showed that genes and chromosomes (and thus heredity) were made of DNA: a positive result, with methods and results that were easy to analyze, and it was frequently ignored.

It wasn't until the far simpler and less ambiguous Hershey/Chase experiment, one that even physicists could understand, was done. But oddly, it was framed as a negative result when it was published: "protein is not the heredity molecule," although by reasonable exclusion one could easily conclude that DNA was almost certainly the heredity molecule.

Even then, a lot of other evidence was required for the mainstream community to accept this. The bias toward protein is long-standing and persists today.


Exactly this. Tons of people would love to publish negative results. But when you're in the situation where something isn't working, actually proving that it doesn't work is a much harder problem.



