
The value is that even given this, they're much more reliable than preprints, and much much more reliable than sensational Twitter threads or public forums. They manage to filter out a tremendous amount of crap, but a sufficiently crafty and dishonest author can still sometimes slip past -- as is possible in any system.


Sure, but plenty of high-profile papers in top-end journals like Nature fail to replicate: https://www.vox.com/science-and-health/2018/8/27/17761466/ps...

If we are to learn anything from the last 10 years of the replication crisis, it has to be that a single paper is a data point, not a conclusion.


Scientists, especially in biology (possibly psychology too, but I don't know), do look at papers as data points and not conclusions. The "conclusion-minded" treatment of papers is a sad artefact of the publish-or-perish system (which pushes authors to add a ton of spin to their results to make them publishable) and of pop-science journalism.


This is neither new nor surprising.

There is a whole field of science dedicated to the statistical analysis of multiple studies in the prior literature; just search for _meta-analysis_ in the context of medical research.

A state-of-the-art meta-analysis is indeed the strongest level of scientific evidence available to inform healthcare decisions or guidelines.
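To make the core idea concrete, here is a minimal sketch of the simplest pooling step in a meta-analysis: fixed-effect, inverse-variance weighting of per-study effect estimates. The effect sizes and standard errors below are made up for illustration, and a real meta-analysis would add a random-effects model, heterogeneity statistics, and bias assessment on top of this.

    import math

    # Hypothetical per-study effect estimates and standard errors
    effects  = [0.30, 0.12, 0.25, 0.05]
    std_errs = [0.10, 0.08, 0.15, 0.06]

    # Inverse-variance weights: more precise studies count more
    weights = [1.0 / se ** 2 for se in std_errs]

    # Fixed-effect pooled estimate and its standard error
    pooled    = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    # 95% confidence interval for the pooled effect
    low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")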

The only purpose of these journals with what they call a high 'impact factor' is that they typically have their peer reviews performed by people regarded as the _best_ in their particular field. Nevertheless, to assume that an expert cannot be conned is also naive.



>If we are to learn anything from the last 10 years of the replication crisis, it has to be that a single paper is a data point, not a conclusion.

I highly recommend this elaboration upon that idea: https://slatestarcodex.com/2014/12/12/beware-the-man-of-one-...




