13.5 Maximum likelihood and Bayesian inference

  • Likelihood:

\[ p(y\mid\beta,X) = \prod_{i=1}^n(\text{logit}^{-1}(X_i\beta))^{y_i}(1-\text{logit}^{-1}(X_i\beta))^{1-y_i} \]

    • The \(\beta\) that maximizes this likelihood can be found by iterative techniques (implemented by glm, for example; see the sketch below)

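A minimal sketch of the maximum likelihood fit with glm, using simulated data (the data frame dat, the predictor x, and the coefficient values are hypothetical, for illustration only):

```r
# Simulate hypothetical data for illustration (values made up)
set.seed(1)
n <- 100
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * x))
dat <- data.frame(x, y)

# Maximum likelihood fit via iteratively reweighted least squares
fit_mle <- glm(y ~ x, family = binomial(link = "logit"), data = dat)
summary(fit_mle)   # point estimates and standard errors for beta
```
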
  • Bayesian inference with uniform prior

    • With prior=NULL and prior_intercept=NULL, this is the same as maximum likelihood (see the sketch below)

    • The benefit is that you get simulations of the full posterior (not just the maximum)!

    • But don’t do this; use priors! (At a minimum, they provide some regularization)

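A sketch of the flat-prior fit described above, reusing the simulated dat from the glm sketch; setting prior = NULL and prior_intercept = NULL switches off rstanarm's default priors:

```r
library(rstanarm)

# Flat (uniform) priors: the posterior mode matches the maximum
# likelihood estimate, but we also get simulations of the full posterior
fit_flat <- stan_glm(y ~ x, family = binomial(link = "logit"), data = dat,
                     prior = NULL, prior_intercept = NULL)
print(fit_flat)
```
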
  • stan_glm by default uses weakly informative priors

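To see the weakly informative defaults, fit without any prior arguments and inspect them with prior_summary (continuing from the rstanarm sketch above, same hypothetical data):

```r
# Default weakly informative priors (no prior arguments supplied)
fit_default <- stan_glm(y ~ x, family = binomial(link = "logit"), data = dat)
prior_summary(fit_default)   # reports the priors stan_glm chose by default
```
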
  • If prior information is available, use it!
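
If prior information is available, it can be passed directly; for example, a normal prior on the intercept and coefficient (the location and scale values below are placeholders, not from the notes):

```r
# Informative priors on the intercept and coefficient (placeholder values)
fit_inform <- stan_glm(y ~ x, family = binomial(link = "logit"), data = dat,
                       prior = normal(location = 0.5, scale = 1),
                       prior_intercept = normal(location = 0, scale = 2.5))
print(fit_inform)
```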