Invited Opinion Piece in Workshop: Debugging Machine Learning Models
Don’t debug your black box, replace it
Cynthia Rudin
Abstract:
Trying to explain black box models is not always a good idea: explanation models do not always agree with the black box models they are trying to explain, and they can depend on different variables than the black boxes do. This renders explanation models incomplete and incorrect; in fact, they can leave you more confused than you were with the black box alone. In this talk I will explore the possibility of replacing black boxes with inherently interpretable models. Interpretable models are easier to debate and debug.
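To make the disagreement point concrete, the following is a minimal sketch (not from the talk) of a common post-hoc explanation setup: a shallow surrogate model is fit to mimic a black box, and the fraction of points on which the two disagree is measured. The choice of scikit-learn, a random forest as the black box, a depth-3 decision tree as the surrogate, and synthetic data are all assumptions made for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for a real decision problem.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box": an ensemble whose individual predictions are hard to inspect.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate "explanation" model: a shallow tree trained to imitate the
# black box's predicted labels rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: fraction of test points on which the explanation agrees
# with the black box it claims to describe.
fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
print(f"surrogate agrees with black box on {fidelity:.1%} of test points")

# Every disagreement is a point where the "explanation" describes a model
# other than the one actually making the decision.
disagreements = np.flatnonzero(surrogate.predict(X_test) != black_box.predict(X_test))
print(f"{disagreements.size} disagreements out of {X_test.shape[0]} test points")
```

Whenever fidelity falls short of 100%, the surrogate's splits (and the variables it relies on) describe an approximation rather than the deployed model, which is the sense in which such explanations can be incomplete and incorrect.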