2  Introduction: Credibility, Models, and Parameters

2.1 Bayesian inference is reallocation of credibility across possibilities

To make Figure 2.1, we need data. We will create some synthetic data and store it in a DataFrame using DataFrames.jl.

using DataFrames

function expand_grid(; iters...)
    # Build a DataFrame with one row for every combination of the
    # keyword-argument iterables (like R's expand.grid()).
    var_names = collect(keys(iters))
    var_itr = [1:length(x) for x in values(iters)]
    var_ix = vcat([collect(x)' for x in Iterators.product(var_itr...)]...)
    out = DataFrame()
    for i in 1:length(var_names)
        out[!, var_names[i]] = collect(iters[var_names[i]])[var_ix[:, i]]
    end
    return out
end

d = expand_grid(iteration = 1:3, Possibilities = ["A", "B", "C", "D"], stage = ["Prior", "Posterior"])

# the credibility values for each row, in plotting order
d2 = DataFrame(Credibility = [fill(0.25, 4); 0; fill(1/3, 3); 0; fill(1/3, 3);
                              0; 0.5; 0; 0.5; 0; 0.5; 0; 0.5; fill(0, 3); 1])

sort!(d, [:iteration])
d.Credibility = d2.Credibility

Here we defined a function that creates all combinations of its arguments, and then bound the result with the Credibility values we will use for plotting.

We can take a look at the top few rows of the data with the first() function.

first(d,5)

5 rows × 4 columns

 Row │ iteration  Possibilities  stage      Credibility
     │ Int64      String         String     Float64
─────┼──────────────────────────────────────────────────
   1 │         1  A              Prior             0.25
   2 │         1  B              Prior             0.25
   3 │         1  C              Prior             0.25
   4 │         1  D              Prior             0.25
   5 │         1  A              Posterior         0.0

Now let's plot our version of Figure 2.1. To do so, we will use the Gadfly.jl package. It is my preferred plotting package because its syntax is similar to that of the ggplot2 package in R.

[Figure 2.1: Credibility across Possibilities A–D, with a Prior and a Posterior panel for each of the three rows of updating.]

TODO: add the panel annotations, perhaps using the approach from https://stackoverflow.com/questions/50925450/gadfly-julia-how-to-add-text-annotation-to-a-plot
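In the meantime, here is a minimal sketch of that approach: Guide.annotation() accepts a Compose context drawn in data coordinates, so a text label can be placed directly on the panel. The coordinates and label string below are made up for illustration.

using Gadfly, Compose

# Hypothetical example: place the label "Prior" at data coordinates (2, 0.4).
plot(x = 1:4, y = fill(0.25, 4), Geom.bar,
    Guide.annotation(compose(context(), Compose.text(2, 0.4, "Prior"))))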

2.1.1 Data are noisy and inferences are probabilistic.

The next plot involves creating some Gaussian distributions. To do so, we will use the Distributions.jl package, which lets us create a wide range of probability distributions and extract information from them. We will use this package a lot in subsequent chapters to draw the samples that facilitate Bayesian inference. Note that I am ignoring the scaling issue apparent in the four densities not summing to one. Also, to create the illusion of a discrete x axis, I will show only the four tick values.

TODO: rescale the curves such that the density on the y axis for each curve sums to one.
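One way to approach that TODO (a sketch, not the original solution): evaluate each curve at the four possibility values and divide by the total, so the four bar heights sum to one.

using Distributions

# heights of the four bars: each curve evaluated at its own mean
raw = [pdf(Normal(m, 1.2), m) for m in 1:4]

# rescale so the bar heights sum to one; here each becomes 0.25,
# since the four raw heights are equal
scaled = raw ./ sum(raw)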

using Gadfly
using Distributions
using DataFrames

set_default_plot_size(16cm, 18cm)

ticks = [1, 2, 3, 4] # specify the tick marks

# create some data for the bars
x5 = DataFrame(Possibilities = 1, Credibility = pdf(Normal(1, 1.2), 1))
x6 = DataFrame(Possibilities = 2, Credibility = pdf(Normal(2, 1.2), 2))
x7 = DataFrame(Possibilities = 3, Credibility = pdf(Normal(3, 1.2), 3))
x8 = DataFrame(Possibilities = 4, Credibility = pdf(Normal(4, 1.2), 4))

plot(layer(x5, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x6, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x7, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x8, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x -> pdf(Normal(1, 1.2), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(2, 1.2), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(3, 1.2), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(4, 1.2), x), -6, 10, color = [colorant"black"], order = 1),
    Guide.xticks(ticks = ticks),
    Guide.xlabel("Possibilities"),
    Guide.ylabel("Credibility"),
    Guide.title("Prior"),
    Theme(
        background_color = "white",
        grid_color = "white",
        bar_spacing = 1mm,
        key_position = :none
    ))
[Figure 2.3, top panel (“Prior”): the four bars with overlaid normal curves; Possibilities on the x axis, Credibility on the y axis.]

Now we can use the same method to create the bottom panel of Figure 2.3. Here I will just hide the y axis. What is important to note is how the assignment of credibility changes based on our three observations (shown in red).

ticks = [1, 2, 3, 4] # specify the tick marks

# create some data for the bars
x5 = DataFrame(Possibilities = 1, Credibility = pdf(Normal(1, 8), 1))
x6 = DataFrame(Possibilities = 2, Credibility = pdf(Normal(2, 3), 2))
x7 = DataFrame(Possibilities = 3, Credibility = pdf(Normal(3, 5), 3))
x8 = DataFrame(Possibilities = 4, Credibility = pdf(Normal(4, 20), 4))
x9 = DataFrame(Possibilities = [1.75, 2.25, 2.75], Credibility = zeros(3)) # the three observations

plot(layer(x5, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x6, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x7, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x8, x = :Possibilities, y = :Credibility, Geom.bar(orientation = :vertical)),
    layer(x -> pdf(Normal(1, 8), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(2, 3), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(3, 5), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x -> pdf(Normal(4, 20), x), -6, 10, color = [colorant"black"], order = 1),
    layer(x9, x = :Possibilities, y = :Credibility, Geom.point, color = [colorant"red"], size = [.2], order = 2), # add the data
    Guide.xticks(ticks = ticks),
    Guide.yticks(ticks = []),
    Guide.xlabel("Possibilities"),
    Guide.ylabel("Credibility"),
    Guide.title("Posterior"),
    Coord.cartesian(ymin = 0, ymax = 0.3),
    Theme(
        background_color = "white",
        grid_color = "white",
        bar_spacing = 1mm,
        key_position = :none
    ))
[Figure 2.3, bottom panel (“Posterior”): the reallocated bars and curves, with the three observations marked in red.]

2.2 Possibilities are parameter values in descriptive models

In the last section, we used the Normal() function to make curves following the Gaussian distribution. Now we will showcase the power and flexibility of the Distributions.jl package by assigning a distribution to a name and drawing random samples from that object. By doing so we can simulate data from the specified normal distribution.

using Random
# set the seed to make the simulation reproducible
Random.seed!(2022) 

# assign `d` a normal distribution with μ = 10 and σ = 5
d = Normal(10, 5)
# simulate the data with `rand()` draws from the specified distribution
simulated_data = DataFrame(x = rand(d, 2000))

# plot it
# Figure 2.4.a.
plot(layer(simulated_data, x = :x, Geom.histogram(density = true)),
    layer(x -> pdf(Normal(10, 5), x), -8, 27, color = [colorant"black"], order = 1),
    Guide.xlabel("Data Values"),
    Guide.ylabel("Data Probability"),
    Guide.title("The candidate normal distribution\nhas a μ of 10 and σ of 5."),
    Theme(
        background_color = "white",
        grid_color = "white",
        line_width = 1mm
    )
)
[Figure 2.4.a: histogram of the simulated data (Data Values vs. Data Probability) with the candidate normal curve, μ = 10 and σ = 5, overlaid.]
# Figure 2.4.b.
plot(layer(simulated_data, x = :x, Geom.histogram(density = true)),
    layer(x -> pdf(Normal(8, 6), x), -8, 27, color = [colorant"black"], order = 1),
    Guide.xlabel("Data Values"),
    Guide.ylabel("Data Probability"),
    Guide.title("The candidate normal distribution\nhas a μ of 8 and σ of 6."),
    Theme(
        background_color = "white",
        grid_color = "white",
        line_style = [:dash],
        line_width = 1mm
    )
)
[Figure 2.4.b: the same histogram with the candidate normal curve, μ = 8 and σ = 6, overlaid as a dashed line.]

2.3 The steps of Bayesian data analysis

In general, Bayesian analysis of data follows these steps:

  1. Identify the data relevant to the research questions. What are the measurement scales of the data? Which data variables are to be predicted, and which data variables are supposed to act as predictors?
  2. Define a descriptive model for the relevant data. The mathematical form and its parameters should be meaningful and appropriate to the theoretical purposes of the analysis.
  3. Specify a prior distribution on the parameters. The prior must pass muster with the audience of the analysis, such as skeptical scientists.
  4. Use Bayesian inference to re-allocate credibility across parameter values. Interpret the posterior distribution with respect to theoretically meaningful issues (assuming that the model is a reasonable description of the data; see next step).
  5. Check that the posterior predictions mimic the data with reasonable accuracy (i.e., conduct a “posterior predictive check”). If not, then consider a different descriptive model.

Perhaps the best way to explain these steps is with a realistic example of Bayesian data analysis. The discussion that follows is abbreviated for purposes of this introductory chapter, with many technical details suppressed. (p. 25)

I will showcase the entire workflow here. In later chapters we’ll cover this workflow in much more detail.

First we need to generate the data and fit a model to those data. In the HtWtDataGenerator.R script, Kruschke provided the code for a function that simulates height/weight data. Let's rewrite that function in Julia:

using Distributions, Random, DataFrames, StatsBase

function HtWtDataGenerator(nSubj, rndsd = nothing, maleProb = 0.50) 
    # Random height, weight generator for males and females. Uses parameters from
    # Brainard, J. & Burmaster, D. E. (1992). Bivariate distributions for height and
    # weight of men and women in the United States. Risk Analysis, 12(2), 267-275.
    # Kruschke, J. K. (2011). Doing Bayesian data analysis:
    # A Tutorial with R and BUGS. Academic Press / Elsevier.
    # Kruschke, J. K. (2014). Doing Bayesian data analysis, 2nd Edition:
    # A Tutorial with R, JAGS and Stan. Academic Press / Elsevier.
    
    # Specify parameters of multivariate normal (MVN) distributions.
  
    # Men:
    HtMmu   = 69.18
    HtMsd   = 2.87
    lnWtMmu = 5.14
    lnWtMsd = 0.17
    Mrho    = 0.42
    Mmean   = [HtMmu, lnWtMmu]
    Msigma  = [HtMsd^2 Mrho * HtMsd * lnWtMsd; Mrho * HtMsd * lnWtMsd lnWtMsd^2]
    # Women cluster 1:
    HtFmu1   = 63.11
    HtFsd1   = 2.76
    lnWtFmu1 = 5.06
    lnWtFsd1 = 0.24
    Frho1    = 0.41
    prop1    = 0.46
    Fmean1   = [HtFmu1, lnWtFmu1]
    Fsigma1  = [HtFsd1^2 Frho1 * HtFsd1 * lnWtFsd1; Frho1 * HtFsd1 * lnWtFsd1  lnWtFsd1^2]
    # Women cluster 2:
    HtFmu2   = 64.36
    HtFsd2   = 2.49
    lnWtFmu2 = 4.86
    lnWtFsd2 = 0.14
    Frho2    = 0.44
    prop2    = 1 - prop1
    Fmean2   = [HtFmu2, lnWtFmu2]
    Fsigma2  = [HtFsd2^2  Frho2 * HtFsd2 * lnWtFsd2; Frho2 * HtFsd2 * lnWtFsd2  lnWtFsd2^2]

    # Randomly generate data values from those MVN distributions.
    if rndsd !== nothing 
        Random.seed!(rndsd) 
    end
    datamatrix = DataFrame(zeros(nSubj, 3), ["male", "height", "weight"])
    # arbitrary coding values
    maleval = 1
    femaleval = 0

    for i in 1:nSubj
        # Flip a coin to decide sex
        sex = wsample([maleval, femaleval], [maleProb, 1 - maleProb], 1)

        if sex[1] == maleval
            datum = rand(MvNormal(Mmean, Msigma))
        elseif sex[1] == femaleval
            Fclust = wsample([1, 2], [prop1, prop2], 1)

            if Fclust[1] == 1
                datum = rand(MvNormal(Fmean1, Fsigma1))
            else
                datum = rand(MvNormal(Fmean2, Fsigma2))
            end
        end
        # weight was generated on the log scale; convert back to pounds
        datum[2] = exp(datum[2])
        datamatrix[i, :] = [sex; round.(datum, digits = 1)]
    end

    return datamatrix

end
HtWtDataGenerator (generic function with 3 methods)

The HtWtDataGenerator() function has three arguments: nSubj determines how many cases to generate, rndsd optionally sets the random seed for reproducibility, and maleProb determines how probable we want those cases to be from men.
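The call that generated the data below is not shown in the source. Here is a plausible reconstruction: nSubj = 57 is required by the plotting code further down (it binds the data column-wise to 57 sampled posterior draws), but the seed value is arbitrary, so the exact rows below will not reproduce.

# hypothetical call; the seed 2022 is arbitrary
d = HtWtDataGenerator(57, 2022, 0.5)

first(d, 5)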

5 rows × 3 columns

 Row │ male     height   weight
     │ Float64  Float64  Float64
─────┼───────────────────────────
   1 │     1.0     66.0    141.2
   2 │     0.0     60.4    104.3
   3 │     0.0     62.7    139.4
   4 │     0.0     61.7    109.9
   5 │     1.0     64.4    162.5

We’re about ready for the model, which we will fit with the Hamiltonian Monte Carlo (HMC) method via the Turing.jl package. We’ll introduce Turing.jl more fully in Chapter 8; I also recommend checking out the official Turing.jl documentation and tutorials.

Following Solomon Kurz’s example, we’ll use weakly regularizing priors for the intercept and slope, and a half-Cauchy with a fairly large scale parameter for σ.

using Turing
using Optim
using MCMCChains, Plots, StatsPlots, Gadfly
using AbstractMCMC
# Define the model: a simple linear regression.
@model function linear_regression(x, y)
    n = length(y)

    # variance prior: a half-Cauchy with scale 10
    σ ~ truncated(Cauchy(0, 10), 0, Inf)
    # intercept prior
    α ~ Normal(0, 100)
    # slope prior
    β ~ Normal(0, 100)

    # likelihood: each observation is normal around the regression line
    for i in 1:n
        y[i] ~ Normal(α + β * x[i], σ)
    end

    return y
end

# to be concrete, we assign the values to x and y
x = d.height
y = d.weight

model = linear_regression(x, y)

# run the sampler on 4 parallel chains, collecting 2,000 draws each
chain_lin_reg = sample(model, NUTS(500, 0.65), MCMCDistributed(), 2_000, 4);

If you wanted a model summary, you could execute display(chain_lin_reg). For more detail, see Chapter 8.
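As a quick sketch, either of these MCMCChains calls prints the per-parameter summary (output omitted here):

display(chain_lin_reg)       # full summary: means, std errors, ESS, R̂
summarystats(chain_lin_reg)  # just the summary-statistics table

Now, here's Figure 2.5.a.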

# draw 57 random samples from the posterior and store them in a DataFrame
plot_df = DataFrame(sample(chain_lin_reg[:, :, :], 57))
plot_df.x = x
plot_df.y = y

# plot
abline = Geom.abline(color = "red", style = :dash)
Gadfly.plot(plot_df,
    x = :x, y = :y, Geom.point, color = [colorant"black"], intercept = :α, slope = :β, abline,
    Guide.xlabel("Height in inches"),
    Guide.ylabel("Weight in pounds"),
    Guide.title("Data with 57 credible regression lines"),
    Theme(
        background_color = "white",
        grid_color = "white",
        line_style = [:dash],
        line_width = .1mm
    ))
[Figure 2.5.a (“Data with 57 credible regression lines”): Height in inches vs. Weight in pounds, with the sampled regression lines dashed in red.]

For Figure 2.5.b, we’ll mark off the mode and a 95% interval. Strictly speaking, the code below uses percentile-based quantiles as a stand-in for the 95% highest density interval (HDI); for a fairly symmetric posterior like this one, the two are nearly identical.
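If you want a true HDI, here is a minimal sketch (an addition, not from the original): for a unimodal posterior, the 95% HDI is the narrowest interval containing 95% of the sorted draws.

# minimal HDI sketch; assumes a unimodal sample
function hdi(samples::AbstractVector; prob = 0.95)
    sorted = sort(samples)
    n = length(sorted)
    m = ceil(Int, prob * n)                        # draws inside the interval
    widths = sorted[m:n] .- sorted[1:(n - m + 1)]  # widths of all candidate intervals
    j = argmin(widths)                             # index of the narrowest candidate
    return sorted[j], sorted[j + m - 1]
end

# usage sketch: hdi(DataFrame(chain_lin_reg).β)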

df = DataFrame(chain_lin_reg)

# the percentile interval for β, plus the mode as the point estimate
plt_HDI = DataFrame(quantile(group(chain_lin_reg, :β)))
plt_HDI.β .= mode(chain_lin_reg[:, :β, :])

plot(layer(df, x = :β, Geom.histogram(bincount = 50)),
    layer(plt_HDI, x = :β, xmin = "2.5%", xmax = "97.5%", Geom.point, Geom.errorbar, color = [colorant"red"], size = [.1], order = 1),
    Guide.xlabel("β (slope)"),
    Guide.title("The posterior distribution\nThe mode and 95% HDI intervals are\nthe dot and horizontal line at the bottom."),
    Guide.yticks(ticks = []),
    Theme(
        background_color = "white",
        grid_color = "white",
        bar_spacing = 0.25mm,
        line_width = 1mm
    ))
[Figure 2.5.b (“The posterior distribution”): histogram of the slope β, with the mode and 95% interval shown as the dot and horizontal line at the bottom.]

To make Figure 2.6, we use the predict() function. First we create a model object with missing values for our dependent variable; then we predict those values using our posterior samples.

# create a copy of the model where the outcome is missing
model_pred = linear_regression(x, Vector{Missing}(missing, length(y)))
# predict the missing outcomes from the posterior draws
posterior_check = predict(model_pred, chain_lin_reg)

# get the summary stats for plotting
posterior_res = summarystats(posterior_check)
# create a DataFrame holding the prediction quantiles
plot_df = DataFrame(quantile(posterior_check))
plot_df.x = x
plot_df.y = y
plot_df.predictions = posterior_res[:, :mean]

# plot
Gadfly.plot(
    layer(plot_df, x = :x, y = :y, Geom.point, color = [colorant"black"]),
    layer(plot_df, x = :x, y = :predictions, ymin = "2.5%", ymax = "97.5%", Geom.point, Geom.errorbar, color = [colorant"grey"]),
    Guide.xlabel("Height in inches"),
    Guide.ylabel("Weight in pounds"),
    Guide.title("Data with the percentile-based 95% intervals and\nthe means of the posterior predictions"),
    Theme(
        background_color = "white",
        grid_color = "white",
        line_width = .1mm
    ))
[Figure 2.6 (“Data with the percentile-based 95% intervals and the means of the posterior predictions”): Height in inches vs. Weight in pounds, with point-and-errorbar predictions in grey.]

The posterior predictions might be easier to depict with a line and ribbon instead. Luckily, Gadfly.jl makes that an easy change.

Gadfly.plot(
    layer(plot_df, x = :x, y = :y, Geom.point, color = [colorant"black"]),
    layer(plot_df, x = :x, y = :predictions, ymin = "2.5%", ymax = "97.5%", Geom.line, Geom.ribbon, color = [colorant"red"]),
    Guide.xlabel("Height in inches"),
    Guide.ylabel("Weight in pounds"),
    Guide.title("Data with the percentile-based 95% intervals and\nthe means of the posterior predictions"),
    Theme(
        background_color = "white",
        grid_color = "white",
        line_width = .2mm
    ))
[The same figure redrawn with a red line and ribbon in place of the point-and-errorbar predictions.]

“We have seen the five steps of Bayesian analysis in a fairly realistic example. This book explains how to do this sort of analysis for many different applications and types of descriptive models” (p. 30).

That’s it! We can now start digging into the details that make up each of the steps outlined in this introductory chapter. Learning Bayesian inference is a rewarding experience. I wish you the best of luck on this exciting journey!

2.4 Session info

using Pkg
Pkg.status()
      Status `~/.julia/environments/v1.7/Project.toml`
  [80f14c24] AbstractMCMC v4.1.3
  [c7e460c6] ArgParse v1.1.4
  [c52e3926] Atom v0.12.38
  [76274a88] Bijectors v0.10.6
  [e28b5b4c] Bootstrap v2.3.3
  [336ed68f] CSV v0.10.4
  [a81c6b42] Compose v0.9.4
  [a93c6f00] DataFrames v1.3.4
  [1313f7d8] DataFramesMeta v0.12.0
  [864edb3b] DataStructures v0.18.13
  [31c24e10] Distributions v0.25.70
  [c91e804a] Gadfly v1.3.4
  [c27321d9] Glob v1.3.0
  [7073ff75] IJulia v1.23.3
  [e5e0dc1b] Juno v0.8.4
  [c7f686f2] MCMCChains v5.3.1
  [c03570c3] Memoize v0.4.4
  [a15396b6] OnlineStats v1.5.14
  [429524aa] Optim v1.7.2
  [d96e819e] Parameters v0.12.3
  [91a5bcdd] Plots v1.31.7
  [1fd47b50] QuadGK v2.5.0
  [295af30f] Revise v3.4.0
  [ed01d8cd] Sobol v1.5.0
  [03a91e81] SplitApplyCombine v1.2.2
  [90137ffa] StaticArrays v1.5.6
  [2913bbd2] StatsBase v0.33.21
  [4c63d2b9] StatsFuns v1.0.1
  [f3b207a7] StatsPlots v0.14.34
  [fd094767] Suppressor v0.2.1
  [bd369af6] Tables v1.7.0
  [fce5fe82] Turing v0.21.12
  [9d95f2ec] TypedTables v1.4.1
  [9a3f8284] Random