Running Pathfinder on Turing.jl models
This tutorial demonstrates how Turing can be used with Pathfinder.
We'll use a simple linear regression model as an example.
using AbstractMCMC, AdvancedHMC, DynamicPPL, FlexiChains, Pathfinder, Random, Turing
Random.seed!(39)
@model function regress(x)
α ~ Normal()
β ~ Normal()
σ ~ truncated(Normal(); lower=0)
μ = α .+ β .* x
y ~ product_distribution(Normal.(μ, σ))
end
x = 0:0.1:10
true_params = (; α=1.5, β=2, σ=2)
# simulate data
y = rand(regress(x) | true_params)[@varname(y)]
model = regress(x) | (; y)
n_chains = 8
For convenience, pathfinder and multipathfinder can take Turing models as inputs and produce MCMCChains.Chains or FlexiChains.VNChain objects as outputs. To access this, we run Pathfinder normally; the chains representation of the draws (defaulting to Chains) is stored in draws_transformed.
result_single = pathfinder(model; ndraws=1_000)
Single-path Pathfinder result
tries: 1
draws: 1000
fit iteration: 26 (total: 26)
fit ELBO: -213.33 ± 0.06
fit distribution: MvNormal{Float64, Pathfinder.WoodburyPDMat{Float64, LinearAlgebra.Diagonal{Float64, Vector{Float64}}, Matrix{Float64}, Matrix{Float64}, Pathfinder.WoodburyPDFactorization{Float64, LinearAlgebra.Diagonal{Float64, Vector{Float64}}, LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}, LinearAlgebra.UpperTriangular{Float64, Matrix{Float64}}}}, Vector{Float64}}(
dim: 3
μ: [1.6508971085470592, 1.9311753921149317, 0.5801261338911265]
Σ: [0.12833559665737235 -0.024925831081334386 -0.003747748546056793; -0.024925831081334372 0.007926736754357806 0.000559861078137877; -0.0037477485460567896 0.0005598610781378777 0.00621065939801297]
)
result_single.draws_transformed
Chains MCMC chain (1000×6×1 Array{Float64, 3}):
Iterations = 1:1:1000
Number of chains = 1
Samples per chain = 1000
parameters = α, β, σ
internals = logprior, loglikelihood, logjoint
Use `describe(chains)` for summary statistics and quantiles.
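As a quick usage check (a sketch beyond the tutorial's own output), we can summarize these draws and compare the posterior mean of β with true_params.β:
using Statistics: mean
describe(result_single.draws_transformed)   # summary statistics and quantiles
mean(result_single.draws_transformed[:β])   # posterior mean of β; should be close to true_params.β = 2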
To request a different chain type (e.g. VNChain), we can specify the chain_type directly.
pathfinder(model; ndraws=1_000, chain_type=VNChain).draws_transformed
FlexiChain (1000 iterations, 1 chain)
↓ iter=1:1000 | → chain=1:1
Parameter type VarName
Parameters α, β, σ
Extra keys :logprior, :loglikelihood, :logjoint
Note that while Turing's sample methods default to initializing parameters from the prior with InitFromPrior, Pathfinder defaults to uniformly sampling them in the range [-2, 2] in unconstrained space (equivalent to Turing's InitFromUniform(-2, 2)). To use Turing's default in Pathfinder, specify init_sampler=InitFromPrior().
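For example, the following call makes Pathfinder's default explicit. This is a minimal sketch; it assumes InitFromUniform is available from the DynamicPPL/Turing imports above and should behave the same as omitting init_sampler entirely:
pathfinder(model; ndraws=1_000, init_sampler=InitFromUniform(-2, 2))  # Pathfinder's default, written out
Below, we instead pass init_sampler=InitFromPrior() to multipathfinder to match Turing's default.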
result_multi = multipathfinder(model, 1_000; nruns=n_chains, init_sampler=InitFromPrior())
Multi-path Pathfinder result
runs: 8
draws: 1000
Pareto shape diagnostic: 0.24 (good)
The Pareto shape diagnostic indicates that it is likely safe to use these draws to compute posterior estimates.
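The shape parameter can also be checked programmatically. The sketch below assumes the multi-path result stores the PSIS output in a psis_result field with a pareto_shape property (as in recent Pathfinder.jl releases) and uses the usual k̂ < 0.7 rule of thumb:
khat = result_multi.psis_result.pareto_shape  # Pareto shape (k̂) from the Pareto-smoothed importance resampling step
khat < 0.7 || @warn "High Pareto shape: importance-resampled draws may be unreliable" khat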
chns_pf = result_multi.draws_transformed
describe(chns_pf)
Chains MCMC chain (1000×6×1 Array{Float64, 3}):
Iterations = 1:1:1000
Number of chains = 1
Samples per chain = 1000
parameters = α, β, σ
internals = logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Missing
α 1.6607 0.3475 0.0113 956.5274 959.1012 1.0010 missing
β 1.9308 0.0611 0.0019 1034.2238 849.1882 1.0002 missing
σ 1.8105 0.1258 0.0038 1094.3827 975.6785 1.0022 missing
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9588 1.4459 1.6765 1.8979 2.3181
β 1.8089 1.8904 1.9341 1.9729 2.0396
σ 1.5999 1.7236 1.7967 1.8956 2.0963
We can also use these draws to initialize MCMC sampling with InitFromParams.
params = AbstractMCMC.to_samples(DynamicPPL.ParamsWithStats, chns_pf[1:n_chains, :, :], model)
initial_params = [InitFromParams(p.params) for p in vec(params)]
chns = sample(model, Turing.NUTS(), MCMCThreads(), 1_000, n_chains; initial_params, progress=false)
describe(chns)
┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/C1aKp/src/sample.jl:544
┌ Info: Found initial step size
└ ϵ = 0.025
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.2
┌ Info: Found initial step size
└ ϵ = 0.05
Chains MCMC chain (1000×17×8 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 8
Samples per chain = 1000
Wall duration = 4.36 seconds
Compute duration = 2.91 seconds
parameters = α, β, σ
internals = n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Float64
α 1.6337 0.3380 0.0053 4066.3455 4375.1240 1.0018 1398.3306
β 1.9333 0.0585 0.0009 4078.0293 4357.4593 1.0032 1402.3485
σ 1.8139 0.1256 0.0018 4862.9435 4499.6669 1.0013 1672.2639
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9499 1.4102 1.6362 1.8596 2.2992
β 1.8210 1.8935 1.9333 1.9719 2.0487
σ 1.5820 1.7279 1.8076 1.8949 2.0826
We can use Pathfinder's estimate of the metric and only perform enough warm-up to tune the step size.
inv_metric = result_multi.pathfinder_results[1].fit_distribution.Σ
metric = Pathfinder.RankUpdateEuclideanMetric(inv_metric)
kernel = HMCKernel(Trajectory{MultinomialTS}(Leapfrog(0.0), GeneralisedNoUTurn()))
adaptor = StepSizeAdaptor(0.8, 1.0) # adapt only the step size
nuts = AdvancedHMC.HMCSampler(kernel, metric, adaptor)
n_adapts = 50
n_draws = 1_000
chns = sample(
model,
externalsampler(nuts),
MCMCThreads(),
n_draws + n_adapts,
n_chains;
n_adapts,
initial_params,
progress=false,
)[n_adapts + 1:end, :, :] # drop warm-up draws
describe(chns)
┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/C1aKp/src/sample.jl:544
[ Info: Found initial step size 1.6
[ Info: Found initial step size 0.8500000000000001
[ Info: Found initial step size 0.8500000000000001
[ Info: Found initial step size 1.5875000000000001
[ Info: Found initial step size 0.8
[ Info: Found initial step size 0.8500000000000001
[ Info: Found initial step size 0.8
[ Info: Found initial step size 0.8
Chains MCMC chain (1000×17×8 Array{Float64, 3}):
Iterations = 51:1:1050
Number of chains = 8
Samples per chain = 1000
Wall duration = 1.65 seconds
Compute duration = 1.48 seconds
parameters = α, β, σ
internals = n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Float64
α 1.6425 0.3367 0.0048 4825.3650 5368.6923 1.0017 3249.4040
β 1.9321 0.0590 0.0010 3719.0372 3770.6644 1.0026 2504.4022
σ 1.8145 0.1288 0.0021 3934.3622 3788.2042 1.0018 2649.4022
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9708 1.4216 1.6487 1.8658 2.2983
β 1.8190 1.8925 1.9312 1.9711 2.0501
σ 1.5832 1.7248 1.8048 1.8962 2.0895
See Initializing HMC with Pathfinder for further examples.