
But isn’t this how science works? We discover a new way to measure something, and then we start evaluating its usefulness as a measure. Where we find interesting relationships, we report them out and then start looking for causes. Perhaps the new ruler is broken, or perhaps we learn something we didn’t know before. Regardless, I see reports such as this one as interim findings necessary to justify and pursue additional funding.

VC and SV entrepreneurship is pretty much the same game. That’s why you have the “Uber for X” syndrome. Sometimes it sticks, sometimes it doesn’t. We are all human search functions reporting back to the hive mind.



It’s not exactly how science is supposed to work. New measurement technology should result more from a pull than a push. That is, the scientists studying mental diseases probably want a better, more specific way to measure brain activity, because most of their mechanistic hypotheses revolve around brain function. In this case, they’ve been told that there’s a new way to measure gut microbe populations, and the cool part is that the technique produces so much data from a single sample. Something’s gotta correlate!

Now I don’t actually think the gut microbiome is just an automatic p-hacker - most studies are measuring pretty standard aggregate measurements of “diversity”, not just finding the one class out of billions of bacteria that happens to correlate, so I’d agree with the OP that the question is more one of causality. And since a fecal transplant from an ill mouse caused signs of illness in an otherwise healthy mouse, there’s reason to follow up.
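
To make the “aggregate measurement” point concrete, here’s a minimal sketch assuming the Shannon diversity index, one common alpha-diversity measure (the comment doesn’t say which index these studies use, and the counts below are made up):

    import math

    def shannon_diversity(counts):
        """Shannon index H = -sum(p_i * ln(p_i)) over observed taxa."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    # Hypothetical per-sample counts of bacterial classes (illustrative only).
    healthy = [400, 350, 150, 80, 20]
    ill     = [900, 60, 30, 10, 0]

    print(shannon_diversity(healthy))  # higher -> more even community
    print(shannon_diversity(ill))      # lower  -> one class dominates

A single number like this summarizes the whole community, which is why it’s less prone to cherry-picking than testing every taxon individually.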


I think it's a chicken-and-egg problem, so we're unlikely to find agreement.

Sure, sometimes you have a hypothesis and you go searching for a way to measure something.

Other times, you find a new way of looking at things, or you come across some fascinating new data. This helps you generate hypotheses for which you need to design new studies, not least of which is one to ensure that the new measuring tool is measuring what you think it is.

I definitely agree with the sentiment that the product of research such as this shouldn't be policy or a change in worldview. It should be hypothesis generation and new tests.


That is the opposite of how science is supposed to work. Using the method described, you will statistically always find false correlations in data sets. This is because you can find patterns in any set of data.

You have to start with a hypothesis and then come up with the measures and expected outcomes based on it. Then you design the experiment to determine whether the hypothesis is false.
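
A rough simulation of the “patterns in any data” point (my own toy example, not from the thread): generate many unrelated noise variables, test each against a noise “outcome”, and some fraction will clear p < 0.05 by chance alone.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_samples, n_features = 50, 1000   # e.g. 1000 taxa measured in 50 subjects

    outcome = rng.normal(size=n_samples)                 # pure-noise "illness score"
    features = rng.normal(size=(n_samples, n_features))  # pure-noise "abundances"

    # Test every feature against the outcome with no multiple-testing correction.
    p_values = [stats.pearsonr(features[:, j], outcome)[1] for j in range(n_features)]
    false_hits = sum(p < 0.05 for p in p_values)

    print(f"{false_hits} of {n_features} unrelated features 'correlate' at p < 0.05")
    # Expect roughly 5% (~50) false positives even though there is no real signal.

This is exactly why a pre-stated hypothesis (or at least a multiple-comparisons correction) matters when a single sample yields thousands of measurements.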



