We introduce Ensembles of Bayesian Networks (EBNs), a classifier that automatically learns the structures of a set of Bayesian Networks. Our experiments demonstrate that EBNs retain many of the benefits of Random Forests while matching or exceeding the classification performance of traditional Bayesian Networks. In particular, EBNs are well suited to classification on small datasets. Like Random Forests, and unlike traditional Bayesian Networks, EBNs resist overfitting. EBNs also provide a measure of variable importance and can be generated efficiently on a variety of datasets. On datasets from the standard Bayesian Network Repository, EBNs classify as accurately as or more accurately than traditional Bayesian Networks. A case study in engineering education, predicting retention outcomes of engineering students and prioritizing factors related to their retention, illustrates both the performance of EBNs on small datasets and their ability to identify important variables.
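The thesis below gives the full EBN algorithm; as a rough intuition for the Random Forest ingredients it borrows, here is a minimal, hypothetical sketch. It is not the EBN implementation: each ensemble member here is only a naive Bayes model (rather than a learned Bayesian Network structure) trained on a bootstrap sample over a random feature subset, with majority voting and a crude feature-selection-frequency proxy for variable importance. All names (`train_ensemble`, `NaiveBayesMember`, etc.) are invented for illustration.

```python
import math
import random
from collections import Counter, defaultdict

class NaiveBayesMember:
    """One ensemble member: naive Bayes restricted to a random feature subset.
    (The real EBN members are Bayesian Networks with learned structure.)"""
    def __init__(self, features):
        self.features = features                 # feature indices this member may use
        self.class_counts = Counter()            # class -> count
        self.feat_counts = defaultdict(Counter)  # (feature, value) -> class -> count

    def fit(self, X, y):
        for row, cls in zip(X, y):
            self.class_counts[cls] += 1
            for f in self.features:
                self.feat_counts[(f, row[f])][cls] += 1

    def predict(self, row):
        best, best_score = None, float("-inf")
        total = sum(self.class_counts.values())
        for cls, cc in self.class_counts.items():
            # Log-space product of Laplace-smoothed conditionals
            # (the +2 denominator assumes binary feature values).
            score = math.log(cc / total)
            for f in self.features:
                score += math.log((self.feat_counts[(f, row[f])][cls] + 1) / (cc + 2))
            if score > best_score:
                best, best_score = cls, score
        return best

def train_ensemble(X, y, n_members=25, n_feats=2, seed=0):
    """Random Forest-style training: bootstrap samples + random feature subsets."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    all_feats = list(range(len(X[0])))
    members = []
    for _ in range(n_members):
        boot = [rng.choice(idx) for _ in idx]   # bootstrap sample (with replacement)
        feats = rng.sample(all_feats, n_feats)  # random feature subset
        m = NaiveBayesMember(feats)
        m.fit([X[i] for i in boot], [y[i] for i in boot])
        members.append(m)
    return members

def predict(members, row):
    """Classify by majority vote over ensemble members."""
    votes = Counter(m.predict(row) for m in members)
    return votes.most_common(1)[0][0]

def importance(members):
    """Crude proxy for variable importance: how often each feature was selected."""
    return Counter(f for m in members for f in m.features)
```

Usage: on a toy dataset where the class equals feature 0, the ensemble's majority vote recovers the label even though some members never see the informative feature, and `importance` reports how often each feature appeared across members.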
Utz, Christopher (2010). Learning Ensembles of Bayesian Network Structures Using Random Forest Techniques. Master's Thesis, School of Computer Science, University of Oklahoma.
The code below is a beta release; we plan a 1.0 release in the near future. Note that this code also relies on the PowerBayes package.
Created by amcgovern [at] ou.edu.
Last modified June 12, 2017 12:57 PM