Computing least squares and Bayesian regression
In R, to fit a Bayesian regression you can use stan_glm from the rstanarm package, which assumes weakly informative default priors. Here we use the hibbs election data (incumbent party vote share and economic growth) that appears in the output below:

library(rstanarm)
fit_bayes <- stan_glm(vote ~ growth, data = hibbs)
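To see exactly which default priors were used, rstanarm provides prior_summary() (the model summary below also points to its help page):

prior_summary(fit_bayes)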
If you prefer classical least squares, you can use lm:

fit_lm <- lm(vote ~ growth, data = hibbs)
The results will be very similar.
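Displaying the least-squares fit with summary() gives:

summary(fit_lm)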
##
## Call:
## lm(formula = vote ~ growth, data = hibbs)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -8.9929 -0.6674  0.2556  2.3225  5.3094
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  46.2476     1.6219  28.514 8.41e-14 ***
## growth        3.0605     0.6963   4.396  0.00061 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 3.763 on 14 degrees of freedom
## Multiple R-squared: 0.5798, Adjusted R-squared: 0.5498
## F-statistic: 19.32 on 1 and 14 DF, p-value: 0.00061
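The Bayesian fit is displayed the same way; summary() on a stanreg object reports posterior estimates along with MCMC diagnostics:

summary(fit_bayes)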
##
## Model Info:
## function: stan_glm
## family: gaussian [identity]
## formula: vote ~ growth
## algorithm: sampling
## sample: 4000 (posterior sample size)
## priors: see help('prior_summary')
## observations: 16
## predictors: 2
##
## Estimates:
##               mean   sd   10%   50%   90%
## (Intercept) 46.2    1.8  44.0  46.3  48.4
## growth       3.1    0.8   2.1   3.1   4.0
## sigma        4.0    0.8   3.1   3.9   5.1
##
## Fit Diagnostics:
##            mean   sd   10%   50%   90%
## mean_PPD 52.1    1.5  50.3  52.1  53.9
##
## The mean_ppd is the sample average posterior predictive distribution of the outcome variable (for details see help('summary.stanreg')).
##
## MCMC diagnostics
##               mcse Rhat n_eff
## (Intercept)   0.0  1.0  2829
## growth        0.0  1.0  2915
## sigma         0.0  1.0  2299
## mean_PPD      0.0  1.0  3121
## log-posterior 0.0  1.0  1346
##
## For each parameter, mcse is Monte Carlo standard error, n_eff is a crude measure of effective sample size, and Rhat is the potential scale reduction factor on split chains (at convergence Rhat=1).
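The point estimates are indeed nearly identical across the two fits; you can compare them directly with coef(), which for a stan_glm fit returns posterior medians:

coef(fit_lm)     # least-squares estimates
coef(fit_bayes)  # posterior medians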
You can also fit the same model in Python with Bambi:

import bambi as bmb
model = bmb.Model('vote ~ growth', hibbs)  # hibbs as a pandas DataFrame
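Constructing the model does not draw any samples; calling fit() runs MCMC (via PyMC) and returns an ArviZ InferenceData object. A minimal sketch, assuming the hibbs data frame from above:

import arviz as az

results = model.fit()  # draw posterior samples
az.summary(results)    # posterior means, sds, and Rhat, analogous to summary(fit_bayes)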