Will Booker: Evaluating GAM-like Neural Network Architectures for Interpretable Machine Learning


In many machine learning applications, interpretability is of the utmost importance. Artificial intelligence is proliferating, but before you entrust your finances, your well-being, or even your life to a machine, you want to be sure it knows what it's doing.

For a human, the best way to evaluate an algorithm is to pick it apart, understand how it works, and figure out how it arrives at its decisions. Unfortunately, as machine learning techniques become more powerful and more complicated, this kind of reverse-engineering is becoming more difficult. Engineers are often forced to choose a model that is accurate over one that is understandable. In this work, we demonstrate a novel technique that, in certain circumstances, can be both.

This work introduces a novel neural network architecture that improves interpretability without sacrificing model accuracy. We test this architecture on a number of real-world classification datasets and demonstrate that it performs almost identically to state-of-the-art methods. We also introduce Pandemic, a novel image classification benchmark, to demonstrate that our architecture has further applications in deep-learning models.
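The thesis does not spell out the architecture here, but the defining property of a GAM-like network is additivity: each input feature is processed by its own small subnetwork, and the prediction is the sum of those per-feature contributions, each of which can be plotted against its feature to see exactly what the model learned. A minimal sketch of that idea (all class names, layer sizes, and initialization choices below are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

class FeatureNet:
    """Hypothetical one-hidden-layer subnetwork for a single feature.
    Its scalar output is that feature's additive contribution."""
    def __init__(self, hidden=8):
        self.w1 = rng.normal(0.0, 1.0, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, x):            # x: (n, 1)
        h = np.tanh(x @ self.w1 + self.b1)
        return h @ self.w2 + self.b2  # (n, 1)

class GAMNet:
    """GAM-like model: the prediction is the sum of per-feature
    contributions, so each feature's effect is directly inspectable."""
    def __init__(self, n_features, hidden=8):
        self.nets = [FeatureNet(hidden) for _ in range(n_features)]

    def contributions(self, X):       # X: (n, d) -> (n, d)
        return np.hstack([net(X[:, [j]]) for j, net in enumerate(self.nets)])

    def predict(self, X):             # sum of contributions -> (n,)
        return self.contributions(X).sum(axis=1)

# Forward pass on random data: contributions are one column per feature.
X = rng.normal(size=(5, 3))
model = GAMNet(n_features=3)
contrib = model.contributions(X)
pred = model.predict(X)
print(contrib.shape, pred.shape)
```

Because the model is a sum of univariate functions, plotting each subnetwork's output over its feature's range gives a complete picture of the fitted model, which is the interpretability payoff the abstract refers to.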


William Booker, MS (2019): Evaluating GAM-like Neural Network Architectures for Interpretable Machine Learning. Master's Thesis, School of Computer Science, University of Oklahoma

Related publications and presentations


Code and data for this thesis are on GitHub.

Created by amcgovern [at] ou.edu.

Last modified September 26, 2019 9:43 AM