Overview
- Recap of previous methods for instance-level explanation
- Break-down plots (with and without interactions)
- Shapley Additive Explanations (SHAP)
- Previous approaches may be problematic with a large set of predictors
- Too complex for human comprehension
- Long computation times
- Introducing LIME
- Recent method (first published in 2016)
- Replace black-box model with glass-box model
- Sparse explanations
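The core LIME idea listed above (replace the black-box model locally with a glass-box model) can be sketched in a few lines. The example below is a minimal illustration, not the LIME library itself: a hypothetical black-box function, perturbations sampled around one instance, a Gaussian proximity kernel, and a weighted linear surrogate fit by least squares. All names (`black_box`, `x0`, the kernel width) are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([1.0, 0.5])  # the instance we want to explain

# 1. Sample perturbations in a neighborhood of the instance.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2. Weight each sample by its proximity to x0 (Gaussian kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# 3. Fit a weighted linear (glass-box) surrogate via scaled least squares.
A = np.column_stack([np.ones(len(Z)), Z])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

intercept, beta = coef[0], coef[1:]
# beta approximates the local gradient [cos(1.0), 2 * 0.5]
print("local effects:", beta)
```

The surrogate's coefficients give a sparse, human-readable summary of the black box near `x0`; the real LIME method adds interpretable feature representations and an explicit sparsity penalty (e.g. Lasso) on top of this weighted fit.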