
Marginally Interesting: A Bit of Background on "Bayes vs. Frequentists"

I have to admit that I have always found the Bayesian vs. frequentist divide quite silly, at least from a pragmatist's point of view. Instead of taking the alternative viewpoint as an inspiration to rethink one's own approach, modern statistics seems more or less stuck, with the Bayesians probably being the more stubborn of the two. At least in my own experience, Bayesians have been more likely to dismiss the frequentist approach outright than the other way round.

Recently I came across two interesting papers that discuss aspects of the history of modern statistics, and which I found quite helpful for understanding how we arrived at the current situation. The first one is by Sandy Zabell, and the other is by Stephen Fienberg.

I had always silently assumed that the Bayesian and frequentist points of view had somehow been around from the very beginning, and that neither of the two approaches had managed to become the predominant one. I found it quite plausible that such things happen, probably because the unifying “true” point of view hasn't been discovered yet.

However, it turns out that the Bayesian point of view was actually the predominant one until the 1920s. In fact, the first ideas on statistical inference, attributed to Laplace and Bayes, were Bayesian in nature. The term “Bayesian” didn't even exist back then; instead, the application of Bayes' theorem was called “inverse probability”, in analogy to the inversion of a function.
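As a quick reminder (my own addition, in modern notation rather than anything from the historical papers), the “inversion” refers to turning the forward model p(x | θ), the probability of the data x given the parameters θ, into a statement about the parameters given the data:

    p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}

The prior p(θ) on the right-hand side is exactly the ingredient that would later draw the frequentists' fire.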

Then, at the beginning of the 20th century, the frequentist point of view emerged, in particular through the works of R. A. Fisher, Neyman, Pearson, and others who laid the foundations of modern classical statistics. Their main point of critique was that Bayesian inference requires you to specify a prior distribution, which introduces a subjective element: the inference depends not only on the data but also on your own assumptions. In his 1922 paper, Fisher proposes alternative criteria for sound statistical inference such as consistency, efficiency, and sufficiency.
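To make the critique concrete, here is a minimal sketch of my own (not taken from any of the papers mentioned) showing how the choice of prior colours the conclusion. It uses the standard Beta-Binomial conjugate pair for a coin-flip experiment; the data (7 heads, 3 tails) and the two priors are made up purely for illustration.

    # Illustration of the "subjective prior" critique: the same coin-flip data
    # yields different posteriors under different priors. With a Beta prior and
    # Binomial data, the posterior is Beta(a + heads, b + tails) in closed form.

    data_heads, data_tails = 7, 3  # hypothetical observed data

    priors = {
        "uniform prior Beta(1, 1)": (1, 1),
        "sceptical prior Beta(20, 20)": (20, 20),
    }

    for name, (a, b) in priors.items():
        # Conjugate update of the Beta prior with the observed counts
        post_a, post_b = a + data_heads, b + data_tails
        post_mean = post_a / (post_a + post_b)
        print(f"{name}: posterior mean of P(heads) = {post_mean:.3f}")

    # Same data, different priors, different conclusions: this dependence on
    # the analyst's assumptions is the subjectivity the frequentists objected to.

Running this prints a posterior mean of about 0.667 under the uniform prior and 0.54 under the sceptical one, even though the data are identical.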

These works had a tremendous impact at the time, pushing Bayesian inference into the background as everyone jumped on the new frequentist bandwagon. Only in the 1950s did the Bayesian approach reemerge and slowly begin to attract researchers again.

In summary, my understanding that both approaches had always been equal rivals was wrong. Instead, the Bayesian approach was almost eclipsed by the frequentist one and only began to recover some 50-60 years ago. This means that your professor might have studied with someone who still remembers the days when Bayesian inference was trying to make a comeback and get its share of the cake.

Or, put differently, the reason that frequentists usually don't care which label they go by might be that, historically, the frequentists simply conquered the whole field when they arrived, while the Bayesians still carry the somewhat traumatic memory of having been eclipsed by an alternative approach.

Which is not to say that this is still the case. On the contrary, I think the Bayesian and frequentist approaches have long since reached an equal footing, at least in the area of machine learning. So it is probably time for both sides to realize that they actually have more in common than it seems. That is something I'd like to cover in another post.

Posted by Mikio L. Braun at 2010-06-01 16:21:00 +0200
