Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees
Z. Zheng, G. Webb, and K. Ting. Proceedings of the Sixteenth International Conference on Machine Learning (ICML-99), pages 493-502. San Francisco: Morgan Kaufmann, 1999.
Abstract
LBR is a lazy semi-naive Bayesian classifier learning technique designed to alleviate the attribute interdependence problem of naive Bayesian classification. To classify a test example, it creates a conjunctive rule that selects the most appropriate subset of training examples and induces a local naive Bayesian classifier from this subset. LBR can significantly improve the performance of the naive Bayesian classifier. A bias-variance analysis of LBR reveals that it significantly reduces the bias of naive Bayesian classification at the cost of a slight increase in variance. It is interesting to compare this lazy technique with boosting and bagging, two well-known state-of-the-art non-lazy learning techniques. Empirical comparison of LBR with boosting decision trees on discrete-valued data shows that LBR has, on average, significantly lower variance and higher bias. As a result of the interaction of these two effects, the average prediction error of LBR over a range of learning tasks is directly comparable to that of boosting. LBR thus provides a very competitive learning technique for discrete-valued data when error minimization is the primary concern. It is also very efficient when a single classifier is applied to classify only a few cases, such as in a typical incremental learning scenario.
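The abstract describes the procedure only in outline. The sketch below is one plausible reading of it, not the published algorithm: it assumes NumPy, purely categorical attributes, a hypothetical min_cover coverage floor, and plain resubstitution error where the paper uses more careful leave-one-out-style estimates within the training data.

import numpy as np

class CategoricalNB:
    """Naive Bayes over a chosen set of categorical attributes,
    with Laplace smoothing."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def fit(self, X, y, attrs):
        self.attrs = list(attrs)            # attribute indices NB models
        self.classes = np.unique(y)
        # Unnormalised log priors: the shared constant cancels in argmax.
        self.logprior = np.log(np.array(
            [(y == c).sum() + self.alpha for c in self.classes]))
        self.tables = {}
        for j in self.attrs:
            vals = np.unique(X[:, j])
            for c in self.classes:
                Xc = X[y == c, j]
                for v in vals:
                    self.tables[(j, c, v)] = np.log(
                        ((Xc == v).sum() + self.alpha)
                        / (len(Xc) + self.alpha * len(vals)))
        return self

    def predict(self, x):
        # Values unseen in the (local) training subset get a crude floor.
        scores = [self.logprior[i] + sum(
                      self.tables.get((j, c, x[j]), -10.0)
                      for j in self.attrs)
                  for i, c in enumerate(self.classes)]
        return self.classes[int(np.argmax(scores))]

def errors(nb, X, y):
    """Error count of a fitted model on (X, y); a simplification of the
    paper's leave-one-out-style estimate."""
    return sum(nb.predict(r) != t for r, t in zip(X, y))

def lbr_classify(X, y, x_test, min_cover=5):
    """Classify one test example LBR-style: greedily move attributes
    (fixed to the test example's values) from the local naive Bayes
    into a conjunctive rule antecedent while local error improves."""
    remaining = list(range(X.shape[1]))     # attributes still in the NB
    sub_X, sub_y = X, y                     # examples covered by the rule
    improved = True
    while improved and len(remaining) > 1:
        improved = False
        current = CategoricalNB().fit(sub_X, sub_y, remaining)
        best = None
        for j in remaining:
            mask = sub_X[:, j] == x_test[j]
            if mask.sum() < min_cover:      # assumed coverage floor
                continue
            cand_attrs = [a for a in remaining if a != j]
            cand = CategoricalNB().fit(sub_X[mask], sub_y[mask], cand_attrs)
            # Compare both models on the examples the candidate rule covers.
            gain = (errors(current, sub_X[mask], sub_y[mask])
                    - errors(cand, sub_X[mask], sub_y[mask]))
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, j, mask)
        if best is not None:
            _, j, mask = best
            remaining.remove(j)             # j joins the rule antecedent
            sub_X, sub_y = sub_X[mask], sub_y[mask]
            improved = True
    return CategoricalNB().fit(sub_X, sub_y, remaining).predict(x_test)

As an illustration, with X an (n, d) array of category labels and y an array of class labels, lbr_classify(X, y, X_test[i]) yields one lazy prediction per test case; nothing is precomputed, which is why this approach suits settings where a single classifier is applied to only a few cases.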
@inproceedings{ZhengWebbTing99,
abstract = {LBR is a lazy semi-naive Bayesian classifier learning technique designed to alleviate the attribute interdependence problem of naive Bayesian classification. To classify a test example, it creates a conjunctive rule that selects the most appropriate subset of training examples and induces a local naive Bayesian classifier from this subset. LBR can significantly improve the performance of the naive Bayesian classifier. A bias-variance analysis of LBR reveals that it significantly reduces the bias of naive Bayesian classification at the cost of a slight increase in variance. It is interesting to compare this lazy technique with boosting and bagging, two well-known state-of-the-art non-lazy learning techniques. Empirical comparison of LBR with boosting decision trees on discrete-valued data shows that LBR has, on average, significantly lower variance and higher bias. As a result of the interaction of these two effects, the average prediction error of LBR over a range of learning tasks is directly comparable to that of boosting. LBR thus provides a very competitive learning technique for discrete-valued data when error minimization is the primary concern. It is also very efficient when a single classifier is applied to classify only a few cases, such as in a typical incremental learning scenario.},
address = {San Francisco},
author = {Zheng, Z. and Webb, G. I. and Ting, K. M.},
booktitle = {Proceedings of the Sixteenth International Conference on Machine Learning (ICML-99)},
editor = {Bratko, I. and Dzeroski, S.},
keywords = {Bayesian Conditional Estimation, Lazy Learning, Probability Rules},
location = {Bled, Slovenia},
pages = {493--502},
publisher = {Morgan Kaufmann},
title = {Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning Technique Competitive to Boosting Decision Trees},
year = 1999
}