Understanding The Accuracy-Interpretability Trade-Off




Model Interpretability

When we estimate an unknown function f, we are mostly interested in performing either inference or prediction (and sometimes both).

When working in the inference setting, we usually treat models as white boxes, since we seek to understand the relationship between the input X and the output variable Y. Less flexible methods tend to be more interpretable and are therefore better suited to inference.
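As a minimal sketch of this idea (using scikit-learn and synthetic data as assumptions, not anything the article prescribes), a linear regression is a less flexible, white-box model: each fitted coefficient can be read directly as the estimated effect of one predictor on Y.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: Y depends linearly on two predictors plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# A less flexible, white-box model: the coefficients themselves
# describe the relationship between X and Y
model = LinearRegression().fit(X, y)
print(model.coef_)       # approximately [ 3.0, -1.5]
print(model.intercept_)  # approximately 0.0
```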

On the other hand, more flexible methods (such as Support Vector Machines or Boosting), which can estimate more complex shapes for the unknown function f, are far less interpretable. This means that such methods may be less suitable in the inference setting: the more complicated the shape of the function, the more difficult it becomes to understand the relationship between the predictor variables X and the target variable Y.
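To sketch the other end of the trade-off (again assuming scikit-learn and synthetic data for illustration), a gradient boosting model can track a non-linear shape of f well, but the fitted ensemble of trees offers no coefficient-style summary of how each predictor relates to Y.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy data where f has a complicated, non-linear shape
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

# A flexible, black-box model: it fits the complex shape closely,
# but its 300 boosting stages cannot be inspected the way
# regression coefficients can
gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X, y)
print(gbm.score(X, y))        # high in-sample R^2 on the non-linear target
print(len(gbm.estimators_))   # 300 trees -- accurate, but hard to interpret
```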
