16.1 Introduction

A model’s interpretability is crucial for:

  • Business adoption
  • Model documentation
  • Regulatory oversight
  • Human acceptance and trust

Interpretable machine learning provides tools to explain model behavior at different levels.

  • Global interpretation explains how the model makes predictions based on a holistic view of its features. It can reveal:
    • which features are the most influential (via feature importance; see the first sketch after this list)
    • how the most influential features drive the model output (via feature effects; see the second sketch)
  • Local interpretation becomes important when the model’s most influential features do not have the biggest influence for a particular observation. It helps us understand which features drive the predicted response for a given observation or a small group of observations. Common approaches include:
    • Local interpretable model-agnostic explanations (LIME; see the third sketch after this list)
    • Shapley values (see the fourth sketch)
    • Localized step-wise procedures
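
To make the global view concrete, here is a minimal sketch of permutation-based feature importance using scikit-learn. The synthetic dataset and the RandomForestRegressor are stand-ins for whatever data and model you are actually working with.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; substitute your own.
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permute each feature in turn and measure the drop in test score;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```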
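
Feature effects can be explored with a partial dependence curve. Rather than rely on any particular library API, the sketch below computes one by hand for a single feature, reusing the fitted `model` and `X_test` from the previous sketch; the function name `partial_dependence_1d` is ours, not a library call.

```python
import numpy as np

def partial_dependence_1d(model, X, feature, grid_size=20):
    """Average prediction as one feature sweeps a value grid,
    holding all other features at their observed values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    effects = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v              # force every row to value v
        effects.append(model.predict(X_mod).mean())
    return grid, np.array(effects)

grid, effect = partial_dependence_1d(model, X_test, feature=0)
for v, e in zip(grid, effect):
    print(f"x0 = {v:7.2f} -> average prediction {e:8.2f}")
```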
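
The next sketch shows the core idea behind LIME, not the `lime` package itself: perturb the observation of interest, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. It again reuses `model` and `X_test` from the first sketch; `lime_like_explanation` and its parameters are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_like_explanation(model, X, row, n_samples=1000, width=0.75):
    """Fit a weighted linear surrogate around one observation."""
    rng = np.random.default_rng(0)
    x0, scale = X[row], X.std(axis=0)
    # Perturb around x0 with per-feature Gaussian noise.
    Z = x0 + rng.normal(size=(n_samples, X.shape[1])) * scale
    # Proximity kernel: nearby perturbations get higher weight.
    dist = np.linalg.norm((Z - x0) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / width ** 2)
    # The surrogate's coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z - x0, model.predict(Z), sample_weight=weights)
    return surrogate.coef_

coefs = lime_like_explanation(model, X_test, row=0)
print("local coefficients:", np.round(coefs, 3))
```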
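
Shapley values can be approximated by Monte Carlo sampling over random feature orderings. The sketch below estimates the contribution of a single feature to a single prediction, once more reusing `model` and `X_test`; `shapley_value`, the iteration count, and the seed are illustrative.

```python
import numpy as np

def shapley_value(model, X, row, feature, n_iter=200, seed=0):
    """Monte Carlo estimate of one feature's Shapley value."""
    rng = np.random.default_rng(seed)
    x0, n_features = X[row], X.shape[1]
    contributions = []
    for _ in range(n_iter):
        z = X[rng.integers(len(X))]          # random background row
        perm = rng.permutation(n_features)   # random feature order
        pos = np.where(perm == feature)[0][0]
        # x_plus takes x0's values up to and including `feature`;
        # x_minus is identical except `feature` comes from z.
        x_plus, x_minus = z.copy(), z.copy()
        x_plus[perm[:pos + 1]] = x0[perm[:pos + 1]]
        x_minus[perm[:pos]] = x0[perm[:pos]]
        contributions.append(model.predict(x_plus[None, :])[0]
                             - model.predict(x_minus[None, :])[0])
    return float(np.mean(contributions))

print("phi_0 =", round(shapley_value(model, X_test, row=0, feature=0), 3))
```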