Model Interpretability in Machine Learning
December 2019
Abstract
Interpretability is an increasingly vital concern in machine learning. Computerized statistical modeling has become the de facto paradigm for quantitative decision-making in fields such as healthcare, advertising, and investing. Yet the relative opacity of many of these techniques can pose a real problem in sensitive applications. Furthermore, the inability to interpret a model's behavior removes an essential part of the practitioner's feedback loop: a good understanding of the model is needed to know when it is bound to fail, or where it can be improved. In this note, we first review the canonical statistical machine learning problem, then describe the issue of model interpretability and some recent developments. We list examples of both interpretable and non-interpretable models and explain some of the differences.
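As a concrete illustration of the contrast the abstract draws, here is a minimal sketch, not taken from the note itself. It assumes scikit-learn and its bundled diabetes regression dataset, and contrasts a linear model, whose coefficients can be read directly, with a random-forest ensemble, whose predictions admit no comparably compact explanation.

```python
# Illustrative sketch: an interpretable model vs. a black-box model.
# Assumes scikit-learn is installed; the dataset choice is arbitrary.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)

# Interpretable: each coefficient states how a one-unit change in a
# feature moves the prediction, holding the other features fixed.
linear = LinearRegression().fit(X, y)
print("linear coefficients:", linear.coef_)

# Non-interpretable: predictions are averages over hundreds of deep
# decision trees, so no short, faithful summary of the model exists.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("forest prediction:", forest.predict(X[:1]))
```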