Turing.jl vs Stan in Julia

I have a few days off, so I want to spend the time learning Julia. I work mostly in R and Python, and my work involves a lot of Bayesian modeling, which I usually use either Stan or PyMC3 for. Recently I've been playing around with Julia, and I've heard a lot of praise for Turing.jl for probabilistic programming. Has anyone here used it? What are its pros and cons compared with, say, Stan or PyMC3?

Turing.jl dev here. There are various differences between Turing, Stan, and PyMC. Most importantly, Turing only adds a thin layer on top of your Julia code, so you can use any Julia code or library inside your models. This means you can easily use neural networks, GPUs, ODEs, or whatever you like. It also makes Turing more easily hackable.

Another big difference is that Turing lets you combine samplers and provides inference algorithms for both continuous and discrete random variables. For the continuous case, we have an excellent implementation of Stan's HMC and its variants, provided in a library that can also be used independently. Support for discrete random variables mostly relies on particle Gibbs, but we are working on a JAGS-style sampler at the moment.

That said, Turing is much younger than the other PPLs. We may lack specific tools, or you might run into corner cases we didn't think of, but we work hard to fix bugs ASAP and respond to questions quickly.
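To give a concrete sense of that "thin layer": a Turing model is just a Julia function annotated with the `@model` macro, and sampling is one call. The coin-flip example below is my own illustration, not from the thread; the data and sampler settings are arbitrary.

```julia
using Turing

# A minimal coin-flip model: Beta prior on the heads probability,
# Bernoulli likelihood for each observed flip.
@model function coinflip(y)
    p ~ Beta(1, 1)            # uniform prior on p
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)   # each flip is a Bernoulli draw
    end
end

data = [1, 1, 0, 1, 0, 1, 1, 1]

# NUTS is the same Stan-style Hamiltonian sampler mentioned above.
chain = sample(coinflip(data), NUTS(), 1_000)
```

Because the model body is ordinary Julia, anything callable from Julia (an ODE solver, a neural network layer, GPU arrays) can appear between the `~` statements.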
