Running Pathfinder on Turing.jl models
This tutorial demonstrates how Turing models can be used with Pathfinder. We'll work through a simple linear regression example.
using AbstractMCMC, AdvancedHMC, DynamicPPL, FlexiChains, Pathfinder, Random, Turing
Random.seed!(39)
@model function regress(x)
    α ~ Normal()
    β ~ Normal()
    σ ~ truncated(Normal(); lower=0)
    μ = α .+ β .* x
    y ~ product_distribution(Normal.(μ, σ))
end
x = 0:0.1:10
true_params = (; α=1.5, β=2, σ=2)
# simulate data
(; y) = rand(regress(x) | true_params)
model = regress(x) | (; y)
n_chains = 8

For convenience, pathfinder and multipathfinder can take Turing models as inputs and produce MCMCChains.Chains or FlexiChains.VNChain objects as outputs. To use this, we run Pathfinder as usual; the chains representation of the draws (a Chains by default) is stored in draws_transformed.
result_single = pathfinder(model; ndraws=1_000)

Single-path Pathfinder result
tries: 1
draws: 1000
fit iteration: 14 (total: 16)
fit ELBO: -213.66 ± 0.09
fit distribution: MvNormal{Float64, Pathfinder.WoodburyPDMat{Float64, LinearAlgebra.Diagonal{Float64, Vector{Float64}}, Matrix{Float64}, Matrix{Float64}, Pathfinder.WoodburyPDFactorization{Float64, LinearAlgebra.Diagonal{Float64, Vector{Float64}}, LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}, LinearAlgebra.UpperTriangular{Float64, Matrix{Float64}}}}, Vector{Float64}}(
dim: 3
μ: [1.6508971085750253, 1.9311753921107158, 0.5801261338729266]
Σ: [0.11087798427694032 -0.016527256928018597 -0.0014208078651489035; -0.016527256928018624 0.0034071472308229607 0.00019433068465847872; -0.0014208078651489287 0.00019433068465846744 0.004718299969699233]
)
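Note that fit_distribution (and the raw draws) live in Pathfinder's unconstrained space; here σ is fit on the log scale, which is why the third component of μ above is about 0.58 ≈ log(1.8). As a minimal sketch (assuming the result fields behave as documented for Pathfinder), extra draws can be taken from the fit directly:

# fit_distribution is an MvNormal over (α, β, log σ) in unconstrained space,
# so its draws must be transformed before being interpreted as σ.
unconstrained_draws = rand(result_single.fit_distribution, 5)  # 3×5 matrix
σ_draws = exp.(unconstrained_draws[3, :])  # map log σ back to σ

The draws_transformed field, accessed next, stores Pathfinder's draws already mapped back to the constrained parameter space.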
result_single.draws_transformed

Chains MCMC chain (1000×6×1 Array{Float64, 3}):
Iterations = 1:1:1000
Number of chains = 1
Samples per chain = 1000
parameters = α, β, σ
internals = logprior, loglikelihood, logjoint
Use `describe(chains)` for summary statistics and quantiles.
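Because draws_transformed is an ordinary Chains object, the standard MCMCChains functions work on it. As a quick sketch, we can compare the posterior means against the true parameters:

# Posterior means of α, β, and σ from the Pathfinder draws;
# compare with true_params = (; α=1.5, β=2, σ=2).
mean(result_single.draws_transformed)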
To request a different chain type (e.g. VNChain), we can pass it via the chain_type keyword argument.
pathfinder(model; ndraws=1_000, chain_type=VNChain).draws_transformed

FlexiChain (1000 iterations, 1 chain)
↓ iter=1:1000 | → chain=1:1
Parameter type VarName
Parameters α, β, σ
Extra keys :logprior, :loglikelihood, :logjoint
Note that while Turing's sample methods default to initializing parameters from the prior with InitFromPrior, Pathfinder defaults to uniformly sampling them in the range [-2, 2] in unconstrained space (equivalent to Turing's InitFromUniform(-2, 2)). To use Turing's default in Pathfinder, specify init_sampler=InitFromPrior().
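For example, the following call spells out Pathfinder's default explicitly (a sketch; it should behave the same as calling pathfinder without init_sampler):

# Explicitly request Pathfinder's default initialization:
# uniform draws on [-2, 2] in unconstrained space.
pathfinder(model; ndraws=1_000, init_sampler=InitFromUniform(-2, 2))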
result_multi = multipathfinder(model, 1_000; nruns=n_chains, init_sampler=InitFromPrior())

Multi-path Pathfinder result
runs: 8
draws: 1000
Pareto shape diagnostic: 0.57 (ok)

The Pareto shape diagnostic indicates that it is likely safe to use these draws to compute posterior estimates.
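To check the diagnostic programmatically, here is a sketch assuming the psis_result field documented for multi-path results:

# Pareto shape k̂ from Pareto-smoothed importance sampling;
# values below ~0.7 suggest the resampled draws are reliable.
result_multi.psis_result.pareto_shape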
chns_pf = result_multi.draws_transformed
describe(chns_pf)

Chains MCMC chain (1000×6×1 Array{Float64, 3}):
Iterations = 1:1:1000
Number of chains = 1
Samples per chain = 1000
parameters = α, β, σ
internals = logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Missing
α 1.6431 0.3356 0.0108 954.2866 1023.6250 1.0002 missing
β 1.9310 0.0587 0.0018 1053.3388 845.1344 0.9990 missing
σ 1.8216 0.1289 0.0042 931.3650 804.3368 1.0001 missing
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9977 1.4224 1.6388 1.8658 2.3097
β 1.8241 1.8886 1.9330 1.9678 2.0429
σ 1.5913 1.7300 1.8096 1.8964 2.0972

We can also use these draws to initialize MCMC sampling with InitFromParams.
params = AbstractMCMC.to_samples(DynamicPPL.ParamsWithStats, chns_pf[1:n_chains, :, :])
initial_params = [InitFromParams(p.params) for p in vec(params)]

chns = sample(model, Turing.NUTS(), MCMCThreads(), 1_000, n_chains; initial_params, progress=false)
describe(chns)

┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:544
┌ Info: Found initial step size
└ ϵ = 0.2
┌ Info: Found initial step size
└ ϵ = 0.05
┌ Info: Found initial step size
└ ϵ = 0.2
┌ Info: Found initial step size
└ ϵ = 0.025
┌ Info: Found initial step size
└ ϵ = 0.2
┌ Info: Found initial step size
└ ϵ = 0.025
┌ Info: Found initial step size
└ ϵ = 0.046875
┌ Info: Found initial step size
└ ϵ = 0.046875
Chains MCMC chain (1000×17×8 Array{Float64, 3}):
Iterations = 501:1:1500
Number of chains = 8
Samples per chain = 1000
Wall duration = 5.28 seconds
Compute duration = 3.8 seconds
parameters = α, β, σ
internals = n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Float64
α 1.6455 0.3393 0.0058 3386.1924 4204.2384 1.0013 892.2773
β 1.9317 0.0595 0.0010 3435.4787 3999.7628 1.0014 905.2645
σ 1.8160 0.1286 0.0018 4904.3235 4736.9706 1.0000 1292.3119
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9754 1.4161 1.6501 1.8703 2.3164
β 1.8154 1.8926 1.9316 1.9717 2.0477
σ 1.5875 1.7257 1.8068 1.8991 2.0920

We can use Pathfinder's estimate of the metric and only perform enough warm-up to tune the step size.
inv_metric = result_multi.pathfinder_results[1].fit_distribution.Σ
metric = Pathfinder.RankUpdateEuclideanMetric(inv_metric)
kernel = HMCKernel(Trajectory{MultinomialTS}(Leapfrog(0.0), GeneralisedNoUTurn()))
adaptor = StepSizeAdaptor(0.8, 1.0) # adapt only the step size
nuts = AdvancedHMC.HMCSampler(kernel, metric, adaptor)
n_adapts = 50
n_draws = 1_000
chns = sample(
model,
externalsampler(nuts),
MCMCThreads(),
n_draws + n_adapts,
n_chains;
n_adapts,
initial_params,
progress=false,
)[n_adapts + 1:end, :, :] # drop warm-up draws
describe(chns)

┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC ~/.julia/packages/AbstractMCMC/oqm6Y/src/sample.jl:544
[ Info: Found initial step size 1.6
[ Info: Found initial step size 3.2
[ Info: Found initial step size 1.6
[ Info: Found initial step size 3.2
[ Info: Found initial step size 1.6
[ Info: Found initial step size 1.7000000000000002
[ Info: Found initial step size 1.6125
[ Info: Found initial step size 1.6
Chains MCMC chain (1000×17×8 Array{Float64, 3}):
Iterations = 51:1:1050
Number of chains = 8
Samples per chain = 1000
Wall duration = 1.64 seconds
Compute duration = 1.45 seconds
parameters = α, β, σ
internals = n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, logprior, loglikelihood, logjoint
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rhat ess_per_sec
Symbol Float64 Float64 Float64 Float64 Float64 Float64 Float64
α 1.6439 0.3393 0.0034 10272.6852 6279.7259 1.0018 7084.6104
β 1.9317 0.0591 0.0006 10322.0121 6023.3676 1.0018 7118.6290
σ 1.8150 0.1258 0.0012 10411.2790 6212.8585 1.0013 7180.1924
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5%
Symbol Float64 Float64 Float64 Float64 Float64
α 0.9618 1.4235 1.6464 1.8672 2.3049
β 1.8163 1.8914 1.9310 1.9707 2.0494
σ 1.5880 1.7259 1.8084 1.8946 2.0777

See Initializing HMC with Pathfinder for further examples.