
Solving Regression Problems using Competitive Ensemble Models

Yakov Frayman, Bernard F. Rolfe, and Geoffrey I. Webb. Lecture Notes in Computer Science Vol. 2557: Proceedings of the 15th Australian Joint Conference on Artificial Intelligence (AI 02), pages 511-522. Berlin/Heidelberg, Springer, (2002)

Abstract

The use of ensemble models in many problem domains has increased significantly in recent years. Ensemble modelling, in particular boosting, has shown great promise in improving the predictive performance of a model. The ensemble members are normally combined in a co-operative fashion, where each member performs the same task and their predictions are aggregated to obtain the improved performance. However, it is also possible to combine the ensemble members in a competitive fashion, where the best prediction of the relevant ensemble member is selected for a particular input. This option has previously been somewhat overlooked. The aim of this article is to investigate and compare the competitive and co-operative approaches to combining the models in an ensemble. A competitive ensemble model is compared with MARS with bagging, a mixture of experts, a hierarchical mixture of experts, and a neural network ensemble over several public-domain regression problems with a high degree of nonlinearity and noise. The empirical results show a substantial advantage of competitive learning over co-operative learning for all the regression problems investigated. The requirements for creating efficient ensembles and the available guidelines are also discussed.
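The distinction the abstract draws between co-operative and competitive combination can be made concrete with a short sketch. The Python fragment below is not taken from the paper; the polynomial members and the hand-fixed region gate are illustrative assumptions only (the paper's competitive models, such as a mixture of experts, learn the partitioning from data). It simply contrasts aggregating every member's prediction with selecting a single relevant member for each input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy piecewise-nonlinear, noisy 1-D regression problem.
X = rng.uniform(-3.0, 3.0, 400)
y = np.where(X < 0.0, np.sin(3.0 * X), 0.5 * X ** 2) + rng.normal(0.0, 0.1, X.size)

# Ensemble members: low-degree polynomials, each fitted to one region of the
# input space so that every member acts as a local specialist.
edges = np.linspace(-3.0, 3.0, 4)                       # three regions
members = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (X >= lo) & (X <= hi)
    members.append(np.polyfit(X[mask], y[mask], deg=3))

X_test = np.linspace(-2.9, 2.9, 200)
y_true = np.where(X_test < 0.0, np.sin(3.0 * X_test), 0.5 * X_test ** 2)
preds = np.stack([np.polyval(c, X_test) for c in members])   # (n_members, n_points)

# Co-operative combination: aggregate all members' predictions for every input.
coop = preds.mean(axis=0)

# Competitive combination: a gate selects one relevant member per input;
# here the gate is simply "which region does x fall in".
winner = np.clip(np.searchsorted(edges, X_test) - 1, 0, len(members) - 1)
compet = preds[winner, np.arange(X_test.size)]

rmse = lambda p: float(np.sqrt(np.mean((p - y_true) ** 2)))
print("co-operative (averaged) RMSE:", rmse(coop))
print("competitive (gated)     RMSE:", rmse(compet))
```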
