A paper was released recently:
This describes an approach to doing Monte Carlo on big data sets that requires little communication between machines. Low communication is important if you want a problem to scale. It sounds cool, but it is also not new, and evidently not as universal as it first seems.
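The low-communication idea is usually some variant of "embarrassingly parallel" or consensus MCMC: split the data across machines, let each machine sample from its own subposterior with no communication at all, and only combine the draws at the end. The sketch below is my own toy illustration of that pattern, not the paper's exact algorithm: it assumes a trivial model (normal data with known variance, flat prior), a hypothetical `metropolis` helper, and the precision-weighted combination rule from consensus Monte Carlo.

```python
import math
import random
import statistics

def metropolis(logpost, init=0.0, steps=4000, scale=0.2, seed=0):
    """Random-walk Metropolis sampler; runs entirely on one machine."""
    rng = random.Random(seed)
    theta, lp = init, logpost(init)
    draws = []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, scale)
        lp_prop = logpost(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws[steps // 2:]  # discard the first half as burn-in

# Toy data: x_i ~ Normal(theta, 1) with a flat prior on theta.
rng = random.Random(42)
true_theta = 3.0
data = [rng.gauss(true_theta, 1.0) for _ in range(300)]

# Partition the data across m "machines" (shards).
m = 3
shards = [data[j::m] for j in range(m)]

# Each machine samples its subposterior independently -- zero
# communication during sampling.
def subposterior(shard):
    return lambda t: -0.5 * sum((x - t) ** 2 for x in shard)

chains = [metropolis(subposterior(s), seed=j) for j, s in enumerate(shards)]

# Combine at the end: precision-weighted average of the per-shard draws
# (the consensus Monte Carlo combination rule).
weights = [1.0 / statistics.variance(c) for c in chains]
wsum = sum(weights)
combined = [sum(w * c[i] for w, c in zip(weights, chains)) / wsum
            for i in range(len(chains[0]))]

print(f"consensus estimate: {statistics.mean(combined):.2f}")
```

The point of the exercise is the communication pattern: the only data that ever crosses machine boundaries is the final sets of draws, which is what makes the approach attractive at scale.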
Hacker News has a thread on this at:
which had some posts by what appear to be experts in the field. What I found unusual about this thread was that the snark was reined in quickly and the comments were genuinely useful. I'm not an expert by any means. I see some validity to the complaints, but not enough attention paid to the practical versus the theoretical: perhaps what these people discovered wasn't groundbreaking in a theoretical sense, but can it be used in a practical fashion?
I found the paper more readable than many other papers of its ilk.
There’s another complaint levied against the paper, namely that our scale of data may make Bayesian inference computationally intractable:
It’s not really a complaint against the paper so much as a fear that the technique may become irrelevant in the near future.