Abstract
Cellular Genetic Programming for data classification, extended with the boosting technique to induce an ensemble of predictors, is presented. The method implements AdaBoost.M2 in parallel to deal efficiently with multi-class problems, and it can handle large data sets that do not fit in main memory, since each classifier is trained on a subset of the overall training data. Experiments on several data sets show that, by using a training set of reduced size, better classification accuracy can be obtained at a much lower computational cost.
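To make the boosting scheme concrete, the sketch below shows a simplified multi-class boosting loop in which each weak learner is fit on a weighted subsample of the data, mirroring the abstract's claim that every classifier sees only a subset of the training set. This is an illustrative stand-in, not the paper's method: it uses a SAMME-style weight update rather than AdaBoost.M2, a one-feature decision stump instead of a GP-evolved classifier, and all names (`Stump`, `boost`, `subset`) are hypothetical.

```python
import numpy as np

def weighted_majority(y, w, k):
    # class with the largest total weight
    return int(np.argmax(np.bincount(y, weights=w, minlength=k)))

class Stump:
    """Hypothetical weak learner: threshold one feature, predict the
    weighted-majority class on each side of the split."""
    def fit(self, X, y, w, k):
        # fall back to predicting the global weighted-majority class
        maj = weighted_majority(y, w, k)
        self.f, self.t, self.cl, self.cr = 0, np.inf, maj, maj
        self.err = w[self.predict(X) != y].sum()
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                left = X[:, f] <= t
                if left.all() or not left.any():
                    continue
                cl = weighted_majority(y[left], w[left], k)
                cr = weighted_majority(y[~left], w[~left], k)
                err = w[np.where(left, cl, cr) != y].sum()
                if err < self.err:
                    self.f, self.t, self.cl, self.cr, self.err = f, t, cl, cr, err
        return self

    def predict(self, X):
        return np.where(X[:, self.f] <= self.t, self.cl, self.cr)

def boost(X, y, k, rounds=20, subset=0.5, seed=0):
    """SAMME-style multi-class boosting; each round trains the weak
    learner on a weighted subsample (fraction `subset`) of the data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        idx = rng.choice(n, size=int(subset * n), replace=True, p=w / w.sum())
        h = Stump().fit(X[idx], y[idx], w[idx], k)
        pred = h.predict(X)
        err = w[pred != y].sum() / w.sum()
        if err >= 1.0 - 1.0 / k:   # no better than chance: discard this round
            continue
        alpha = np.log((1.0 - err) / max(err, 1e-10)) + np.log(k - 1.0)
        w *= np.exp(alpha * (pred != y))   # up-weight the misclassified points
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, X, k):
    # weighted vote over the ensemble's weak learners
    votes = np.zeros((len(X), k))
    for alpha, h in ensemble:
        votes[np.arange(len(X)), h.predict(X)] += alpha
    return votes.argmax(axis=1)

# toy two-class data: two separable clusters
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.],
              [2., 2.], [2., 3.], [3., 2.], [3., 3.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
ens = boost(X, y, k=2)
acc = (predict(ens, X, 2) == y).mean()
```

The subsampling step is what lets each weak learner work on data that fits in memory: only `idx` rows are materialized per round, while the weight vector still spans the full training set.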