Abstract
Modern machine learning methods often require more data for training than a
single expert can provide. Therefore, it has become a standard procedure to
collect data from external sources, e.g., via crowdsourcing. Unfortunately, the
quality of these sources is not always guaranteed. As additional complications,
the data might be stored in a distributed way, or might even have to remain
private. In this work, we address the question of how to learn robustly in such
scenarios. Studying the problem through the lens of statistical learning
theory, we derive a procedure that allows for learning from all available
sources, yet automatically suppresses irrelevant or corrupted data. We show
through extensive experiments that our method provides significant improvements
over alternative approaches from robust statistics and distributed optimization.