
> Google “the null hypothesis”.

> If you mean null model, then I’m not fighting against anyone. We all agree which null model to use is a choice to be made.

I don’t understand what difference you are trying to imply by drawing a distinction between a null model and a null hypothesis.

> Otherwise, I’m not even sure what you’re trying to convince me of at this point. I’ll restate the essence of my first comment more concisely.

I will try to make it as clear as possible.

> Bayes factors are a method of model comparison.

Are you implying that hypothesis testing isn’t? That’s just false. And I’ve explained why.

> You take the ratio of marginal likelihoods for two models given the data. Choosing a null model for this purpose requires more assumptions than doing null hypothesis testing with frequentist statistics.

And in frequentist statistics you just calculate the likelihood, because you can’t integrate over your model probabilities to get a marginal likelihood: you don’t assume your models have a probability of being true. That’s the only extra assumption you have in Bayesian statistics. Everything else is the same. If you are saying that there are some other extra assumptions, that’s just false, as I’ve explained in my previous comments. There are no extra assumptions for a “null model” beyond putting a prior on it.

> Mixing the schools of thought of Bayesian and frequentist makes things more confusing than operating within them individually. Bayes factors have other uses than null hypothesis testing.

There is no confusing “mixing”. It’s just statistical decision theory. In the frequentist approach you calculate the risk of your decision rule for each model and call it a day. In the Bayesian approach you go one step further and average your risks using your priors to get the “total” Bayes risk.

Both approaches have uses other than null hypothesis testing. Null hypothesis testing is just a particular case of a decision problem with a 0-1 loss function. The loss is 0 if you have chosen the correct hypothesis and it is 1 if you have encountered type I or type II error.
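To make the 0-1-loss framing concrete, here is a minimal sketch. The setup (simple hypotheses H0: N(0,1) vs. H1: N(1,1), a one-sided cutoff, and a 50/50 prior) is my own illustrative choice, not anything from the discussion above:

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

cutoff = 1.645  # decision rule: reject H0 when the observation exceeds this

# Under 0-1 loss, the risk of the rule under each hypothesis is just
# its error probability.
type_I = 1 - norm_cdf(cutoff, mu=0.0)   # H0 true, but we reject it
type_II = norm_cdf(cutoff, mu=1.0)      # H1 true, but we fail to reject H0

# The frequentist reports the per-hypothesis risks and stops here.
# The Bayesian averages them with prior weights to get the Bayes risk.
prior_h0 = 0.5  # illustrative prior on H0, not implied by either comment
bayes_risk = prior_h0 * type_I + (1 - prior_h0) * type_II
```

With this cutoff the type I risk is the familiar 5%, and the Bayes risk is just the prior-weighted average of the two error rates.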



> > Bayes factors are a method of model comparison.

> Are you implying that hypothesis testing isn’t?

No.


Then I don't get the meaning of this:

> Bayes factors work with comparing models. There is no null model. What, 0% effect? Ok, there was a non-zero effect. That model loses since it put the probability of 0% at 1 and everything else at 0. And if you do anything else, you’re encoding some amount of belief into the model, some judgment you’ve made.

> So, you need to pick two models and compare them. I’m not saying this is right for science. It’s working well for my purposes. One model meaning “as planned”, one model meaning “not as planned”, use the Bayes factor to decide if things are going as planned. But you do need to be explicit about what models you’re comparing. You have to be able to just put some data in and get a probability back, or it’s not going to work.

It is the same way with traditional hypothesis testing. You take two models and compare their likelihood.


> It is the same way with traditional hypothesis testing. You take two models and compare their likelihood.

With a Bayes factor you compare the marginal likelihood. You have to account for the weight of the parameters according to the priors. With a likelihood ratio, you pick the best parameters and take the ratio of those likelihoods.
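The distinction can be shown in a few lines. This sketch uses a coin-flip setup of my own choosing (point null p = 0.5 against a uniform prior on p), where the marginal likelihood of the alternative has a closed form:

```python
from math import comb

def binom_lik(p, k, n):
    """Likelihood of k heads in n flips under Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 60, 100  # illustrative data: 60 heads in 100 flips

# Point null: p = 0.5 exactly. No free parameters, so its marginal
# likelihood is just its likelihood.
m0 = binom_lik(0.5, k, n)

# Alternative: p ~ Uniform(0, 1). The marginal likelihood integrates
# the likelihood over the prior; for a uniform prior it collapses to
# ∫ C(n,k) p^k (1-p)^(n-k) dp = 1 / (n + 1).
m1_marginal = 1 / (n + 1)
bayes_factor = m0 / m1_marginal

# The likelihood ratio instead plugs in the best-fitting parameter
# (the MLE p̂ = k/n) for the alternative -- no prior involved.
m1_max = binom_lik(k / n, k, n)
lik_ratio = m0 / m1_max
```

The maximized likelihood is always at least the prior-averaged one, so the likelihood ratio here is more extreme against the null than the Bayes factor, which spreads the alternative's probability mass over parameter values the data don't favor.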

This means a model used in a Bayes factor must be able to make predictions that follow probability axioms. Models in likelihood ratios don’t have this restriction.

I agree likelihood ratios and Bayes factors are similar. They’re also different.


> With a Bayes factor you compare the marginal likelihood. You have to account for the weight of the parameters according to the priors. With a likelihood ratio, you pick the best parameters and take the ratio of those likelihoods.

Yeah, that's the difference that I mentioned. And seems very different from whatever "it put the probability of 0% at 1 and everything else at 0" is supposed to refer to.

> This means a model used in a Bayes factor must be able to make predictions that follow probability axioms. Models in likelihood ratios don’t have this restriction.

Models in likelihood ratios absolutely have to follow probability axioms, otherwise it would make no sense to apply probability axioms to study them.



