We can solve this at the grant level. Stipulate that for every new paper a group publishes from a grant, that group must also publish a replication of an existing finding. Publication would happen in pairs, so that every novel finding would be matched with a replication.
Replications could be matched with grants: if you receive a $100,000 grant, you'd get the $100,000 you need, plus another $100,000 which you could use to publish a replication of a previous $100,000 grant. Researchers could choose which findings they replicate, but with restrictions, e.g. you can't just choose your own group's previous work.
I think if we did this, researchers would naturally be incentivized to publish experiments that are easier to replicate, and fraud like this would eventually be caught.
I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.
Replication is over-emphasised. Attempts to organise mass replications have struggled with basic problems: papers make numerous claims (which one do you replicate?); it's unclear whether you should replicate the original methodology exactly or instead try to answer the same question as the original paper (which matters in cases where the methodology was bad); many papers make obviously low-value findings (e.g. poor children do worse at school); and so on.
But the biggest problem is actually that large swathes of 'scientists' don't do experiments at all. You can't even replicate such papers because they exist purely in the realm of the theoretical. The theory often isn't even properly written down! They will tell you that the paper is just a summary of the real model, which is (at best) found in a giant pile of C or R in some GitHub repo that contains a single commit. Try to replicate their model from the paper, and there isn't enough detail to do so. Try to replicate from the code, and all you're doing is pointlessly rewriting code that already exists (which proves nothing). Try to re-derive their methodology from the original question, and if you can't, they'll just reject your paper as illegitimate criticism and say it wasn't a real replication.
Having reviewed quite a lot of scientific papers in the past six years or so, the ones that were really problematic couldn't have been fixed with incentivized replication.
So then, how on earth does this stuff even get published? What exactly is it that we're all doing here?
If a finding either cannot be communicated clearly enough for someone else to replicate it, or cannot be replicated because the method is shoddy, can we even call that science?
At some level I know that what I'm proposing isn't realistic, because the majority of science is sloppy: p-hacking, lack of detail, bad writing, bad methods, code that doesn't compile, fraud. But maybe if we tried some version of this, it would cause a course correction. Reviewers, knowing that someone would actually attempt to replicate a paper at some point down the road, would be far more critical of ambiguity and lack of detail.
Papers that are not fit to be replicated in the future, whose claims cannot be tested independently, are actually not science at all. They are worth less than nothing because they take up air in the room, choking out actual progress.
That's correct. Fundamentally, the problem is that foundations and government science budgets don't care. As long as voters or Bill Gates or whoever believes they're funding science and progress, the money flows like water. There's no way to fix it short of voting in a government that totally defunds the science budget. Until then, everyone benefits from unscientific behaviour.
The amazing thing is that it all works out in the end and science is still making (quite a lot of) progress.
That's also the reason why we shouldn't spend all of our time and money checking and replicating things just to make sure no one publishes fraudulent/shoddy results. (We should probably spend a little more time and money on that, but not as much more as some people here seem to suggest.)
Most research is in retrospect useless nonsense. It's just impossible to tell in advance. There is no point in checking and replicating all of it. Results that are useful or important will be checked and replicated eventually. If they turn out to be wrong (which is still quite rare), a lot of effort is wasted. However, again, that's rare.
If the fraud/quality issues get worse (different from "featuring more frequently and prominently in the news"), eventually additional checks start to make sense and be worth it overall. I think quite a lot of progress is happening here already, with open data, code, pre-registration of studies, better statistical methods, etc, becoming more common.
I think a major issue is the idea that "papers are the incontestable scientific truth". Some people seem to think that's the goal, or that it used to be the case and fraud is changing that now; however, it was never the case, and it's not at all the point of publishing research. A major gain would be to separate, in the public perception, the concepts, understanding, and reputations of science vs. scientific publishing.
There would still be incentives for collusion (I "reproduce" your research, you "reproduce" mine), and researchers pretending to reproduce papers but actually not bothering (especially if they believe that the original research was done properly).
Ultimately, I'm not sure how to incentivize reproduction of research: it's very easy to fake a successful reproduction (you already know the results, and the original researcher will not challenge you), so you don't want to reward that too much. Whereas incentivizing failed reproductions might lead some scientists to sabotage their own reproduction efforts in ways that are subtle enough to have plausible deniability.
Proceeding by pairs is probably not enough. You probably need 5-6 replications per paper to make sure that at least one attempt is honest and competent, and make the others afraid to do the wrong thing and stand out.
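The intuition behind needing 5-6 replications can be made concrete with a back-of-the-envelope calculation. The 50% honesty rate below is purely an assumption for illustration, not a figure from any study:

```python
# If each independent replicator is honest and competent with probability
# p_honest, the chance that *none* of n replication attempts is honest is
# (1 - p_honest) ** n. The 0.5 figure is an illustrative assumption.
def p_no_honest_attempt(p_honest, n):
    return (1 - p_honest) ** n

print(p_no_honest_attempt(0.5, 1))  # 0.5     -> a single paired replication
print(p_no_honest_attempt(0.5, 5))  # 0.03125 -> five attempts
```

Even under pessimistic assumptions about individual replicators, a handful of independent attempts makes an all-dishonest outcome unlikely, provided the attempts really are independent.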
You could randomize replications a bit, take away the choice. Or make it so that if you replicated one group's result, you can't replicate them again next time. The key is a bit of distance, a bit of neutrality. Enough jitter to break up cliques.
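A randomized assignment with those two constraints (no self-replication, no repeat pairings) could be sketched roughly like this. All function names, field names, and the example data are hypothetical, not from any real system:

```python
import random

def assign_replications(grants, history, seed=None):
    """Randomly pair each grant with a prior finding to replicate,
    excluding the grantee's own group and whichever group they
    replicated last round. Purely an illustrative sketch."""
    rng = random.Random(seed)
    assignments = {}
    for grant in grants:
        candidates = [f for f in grant["prior_findings"]
                      if f["group"] != grant["group"]              # no self-replication
                      and f["group"] != history.get(grant["group"])]  # no repeat pairing
        if candidates:
            assignments[grant["group"]] = rng.choice(candidates)["id"]
    return assignments

grants = [
    {"group": "A", "prior_findings": [
        {"id": "p1", "group": "A"},
        {"id": "p2", "group": "B"},
        {"id": "p3", "group": "C"},
    ]},
]
# Group A replicated group B last round, so only p3 (group C) qualifies.
print(assign_replications(grants, history={"A": "B"}))  # {'A': 'p3'}
```

The point of the randomness is exactly the "jitter" described above: a group can't count on a friendly counterpart being assigned its work.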
I don't work in academia but in my experience professors are basically all intellectually arrogant and ego-driven, and would relish having time and space to beat each other at the brain game. A failed replication is their chance to be "the smarter guy in the room" and crack open some long-held belief. A successful replication would probably happen most of the time and be far more boring.
I could imagine, if such a thing were mandated and in place for a while, one could build her career on replications, as a prosecutor or defense. She would publish new research solely to convince her colleagues that she is sharp enough to play prosecutor or defense.
Anything has got to be better than what we have now, where apparently you can cheat and defraud your way through an entire decades-spanning career.
The tricky thing with randomizing is that science gets very specialized, both with equipment required and knowledge. So there may only be a handful of people whose work you can competently replicate.
And those same people are reviewing the papers you publish and will not hesitate to sabotage your career if you have made them look bad by failing to replicate their papers.
If you publish a paper with fraudulent data, methods, or results, and you received any state or federal funds for it, there should be prison time. You stole taxpayer money.
I'm not saying for when people are wrong, I'm saying for when you can prove someone knowingly lied. It won't catch everyone, and you need the bar to be high enough that people don't go to jail for being bad scientists, but right now there is zero social, professional, or legal risk in just lying your ass off to get the next grant and keep the spice flowing.
Nobody's going to do that when changing the numbers in your Excel sheet carries a risk of a decade or two in a minimum security prison.
I think it would be better to have separate grants for replication studies. If something becomes a mandatory administrative burden, people will see it as low-prestige work and try to avoid it. And the kind of people who are good at novel research are often also good at ignoring duties they don't like, or completing them with minimal effort if forced to.
But if there is separate funding for replication studies, it will become something people compete for. Some people will specialize in replicating others' work, and universities will pay attention, as they care about grant overheads.
> But if there is separate funding for replication studies, it will become something people compete for.
It would need to be very good funding on par with what's offered for "novel research".
In addition, we would need increased prestige (e.g. awards, citations) for replication studies as well for this to be effective. For many academics, funding is merely a means to that end.
Another reason for doing this is that if the people doing replication also do original research, then calling out someone's work as bad incentivizes them to sabotage your work when they inevitably review your papers.
You can avoid that to some extent by having replication and original work be separate specialities - and making sure that replication gets prestige so good people do it.
> I bet we could throw away half of publications tomorrow and see no effect on the actual pace of progress in science.
It might actually improve the pace of science, if the half eliminated were not replicable and the remaining half were written by researchers knowing that they would likely face a replication attempt.
It is a lot easier to just falsely confirm the experiment, since the data is already there and the publisher of the paper is not going to push back if you confirm it.
Why go through all the work of actually proving or disproving the experiment when you can just tweak the numbers of the original experiment, say you reproduced it, and move on?
Would this not incentivise the forming of groups that replicate each other's work? If you're already committing wilful fraud on your own papers, why wouldn't you commit a bit more for another researcher willing to do the same for you? With more than two parties, it won't be immediately obvious that this trading has occurred.