Can we use a Bayesian posterior distribution to calculate significances?
The term “significance” usually designates a p-value, which is a tail probability calculated from the distribution of a test statistic (i.e., a function of the data only) under many repetitions of the measurement, assuming that no signal is present. In contrast, a Bayesian posterior distribution is a distribution of a parameter. It is nevertheless possible to use a posterior distribution to calculate quantities that can be interpreted as evidence against a given hypothesis.

How to do this depends on the type of hypothesis being tested. If one is testing H0: theta <= theta0, then the posterior tail probability below theta0 is exactly the Bayesian probability of H0, and a low value of that probability is evidence against H0. On the other hand, if one is testing the point hypothesis H0: theta = theta0, then the probability of H0 is always zero when theta is a continuous parameter. A more sensible method consists of calculating a highest posterior density (HPD) interval for theta and checking whether theta0 falls inside it: if theta0 lies outside the HPD interval at some credibility level, the data can be regarded as evidence against H0 at that level.
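As a minimal numerical sketch of both procedures, suppose a single Gaussian measurement x of theta with known resolution sigma and a flat prior, so that the posterior for theta is Normal(x, sigma). The values of x, sigma, and theta0 below are illustrative only:

```python
# A minimal sketch, assuming a Gaussian measurement x of a parameter theta
# with known resolution sigma and a flat prior, so that the posterior for
# theta is Normal(x, sigma). All numerical values are illustrative.
from scipy.stats import norm

x, sigma = 3.2, 1.0   # hypothetical observed value and resolution
theta0 = 0.0          # hypothesized parameter value

# One-sided test of H0: theta <= theta0.
# The posterior tail probability below theta0 IS the Bayesian
# probability of H0.
p_h0 = norm.cdf(theta0, loc=x, scale=sigma)
print(f"P(H0: theta <= {theta0} | data) = {p_h0:.4f}")

# Point test of H0: theta = theta0.
# P(H0) is zero for continuous theta, so build a 95% HPD interval
# instead and check whether theta0 falls inside it. For a symmetric,
# unimodal posterior such as this Gaussian, the HPD interval coincides
# with the central (equal-tailed) credible interval.
cred = 0.95
half_width = norm.ppf(0.5 + cred / 2.0) * sigma
lo, hi = x - half_width, x + half_width
inside = lo <= theta0 <= hi
print(f"{cred:.0%} HPD interval: [{lo:.3f}, {hi:.3f}]; theta0 inside: {inside}")
```

For asymmetric or multimodal posteriors the HPD region no longer coincides with the central interval and must generally be found numerically, for example from posterior samples.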