3.4 How has the model changed from last week?

The example of Kasparov’s probability of beating Deep Blue at chess relied on a discrete prior model.

In that case, we greatly over-simplified reality to fit within the framework of introductory Bayesian models. Mainly, we assumed that \(\pi\) could only take the values 0.2, 0.5, or 0.8, with the chance of each specified by a discrete probability model.
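For concreteness, here is a minimal sketch of such a discrete prior in Python. The weights placed on 0.2, 0.5, and 0.8 are illustrative placeholders, not values quoted in this section.

```python
# Discrete prior for pi: a probability mass function (pmf) that puts all of
# its weight on just three candidate values. The weights are illustrative.
discrete_prior = {0.2: 0.10, 0.5: 0.25, 0.8: 0.65}

# A valid pmf assigns nonnegative probabilities that sum to 1.
assert all(p >= 0 for p in discrete_prior.values())
assert abs(sum(discrete_prior.values()) - 1) < 1e-9

print(discrete_prior[0.8])  # prior probability that pi equals exactly 0.8
```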

  • However, in the reality of Michelle’s election support and Kasparov’s chess skill, \(\pi\) can be any value between 0 and 1. We can reflect this reality and conduct a more nuanced Bayesian analysis by constructing a continuous prior probability model.

So, we will now work with:

  • probability density functions for continuous models

  • rather than probability mass functions for discrete models (see the sketch below).
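To see the contrast, the sketch below evaluates a continuous prior for \(\pi\) on a fine grid. The Beta(45, 55) shape is an illustrative assumption, not a value taken from this section; any Beta prior would make the same point about densities.

```python
import numpy as np
from scipy.stats import beta

# Continuous prior for pi: a probability density function (pdf) defined for
# every pi in (0, 1), not just a handful of candidate values.
# Beta(45, 55) is an illustrative choice of prior shape.
pi_grid = np.linspace(0.001, 0.999, 999)
density = beta.pdf(pi_grid, 45, 55)

# Unlike a pmf, a pdf can exceed 1 at a point (its peak here is around 8);
# what must equal 1 is the total area under the curve, not the values themselves.
dx = pi_grid[1] - pi_grid[0]
print(f"peak density ~ {density.max():.2f}")
print(f"area under the pdf ~ {(density * dx).sum():.3f}")  # approximately 1
```

The grid is only for inspection; in the continuous model itself, \(\pi\) can take any value between 0 and 1, and probabilities correspond to areas under the density rather than to the height of the curve at a single point.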