Ranking with Large Margin Principle: Two Approaches
A. Shashua and A. Levin (2003)
Abstract
We discuss the problem of ranking instances with the use of a “large
margin” principle. We introduce two main approaches: the first is the
“fixed margin” policy in which the margin of the closest neighboring
classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for
different margins where the sum of margins is maximized. This approach
is shown to reduce to ν-SVM when the number of classes is two. Both
approaches are optimal in size of n, where n is the total number of training
examples. Experiments performed on visual classification and “collaborative filtering” show that both approaches outperform existing ordinal
regression algorithms applied for ranking and multiclass SVM applied
to general multiclass classification.
Description
Two SVM formulations of ranking (or rather ordinal regression) are introduced: one maximizes the individual margins separating the different classes and one maximizes the sum of the margins. There might be a way to introduce these types of losses in the MMMF formulation.
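As a rough illustration of the first ("fixed margin") formulation: a hypothetical sketch, not the paper's actual quadratic program. It learns one shared weight vector and k-1 ordered thresholds by plain subgradient descent on hinge losses, requiring each example's score to fall between its class's thresholds with margin 1; the function names and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def fit_ordinal_fixed_margin(X, y, k, lam=0.01, lr=0.1, epochs=1000, seed=0):
    """Sketch of fixed-margin ordinal regression: learn a weight vector w and
    k-1 increasing thresholds b so that an example of rank r satisfies
    b[r-1] + 1 <= w.x <= b[r] - 1, penalizing violations with hinge losses
    plus an L2 penalty on w. Trained by plain subgradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)
    b = np.linspace(-1.0, 1.0, k - 1)            # ordered thresholds
    for _ in range(epochs):
        gw = lam * w                             # gradient of (lam/2)||w||^2
        gb = np.zeros(k - 1)
        for xi, r in zip(X, y):
            s = xi @ w
            if r < k - 1 and s - b[r] + 1 > 0:   # upper hinge active
                gw += xi / n
                gb[r] -= 1.0 / n
            if r > 0 and b[r - 1] - s + 1 > 0:   # lower hinge active
                gw -= xi / n
                gb[r - 1] += 1.0 / n
        w -= lr * gw
        b = np.sort(b - lr * gb)                 # project back to ordered
    return w, b

def predict(X, w, b):
    # rank = number of thresholds lying below the score
    return np.searchsorted(b, X @ w)
```

The sum-of-margins variant would instead give each pair of neighboring classes its own margin and maximize the total; the paper solves both formulations as quadratic programs rather than by descent.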
%0 Conference Paper
%1 shale03
%A Shashua, Amnon
%A Levin, Anat
%D 2003
%E NIPS
%K collaborativefiltering ranking svm
%T Ranking with Large Margin Principle: Two Approaches
%U http://books.nips.cc/papers/files/nips15/AA58.pdf
%X We discuss the problem of ranking instances with the use of a “large
margin” principle. We introduce two main approaches: the first is the
“fixed margin” policy in which the margin of the closest neighboring
classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for
different margins where the sum of margins is maximized. This approach
is shown to reduce to ν-SVM when the number of classes is two. Both
approaches are optimal in size of n, where n is the total number of training
examples. Experiments performed on visual classification and “collaborative filtering” show that both approaches outperform existing ordinal
regression algorithms applied for ranking and multiclass SVM applied
to general multiclass classification.
@inproceedings{shale03,
abstract = {We discuss the problem of ranking instances with the use of a “large
margin” principle. We introduce two main approaches: the first is the
“fixed margin” policy in which the margin of the closest neighboring
classes is being maximized — which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for
different margins where the sum of margins is maximized. This approach
is shown to reduce to $\nu$-SVM when the number of classes is two. Both
approaches are optimal in size of n, where n is the total number of training
examples. Experiments performed on visual classification and “collaborative filtering” show that both approaches outperform existing ordinal
regression algorithms applied for ranking and multiclass SVM applied
to general multiclass classification.
},
added-at = {2009-09-28T17:33:52.000+0200},
author = {Shashua, Amnon and Levin, Anat},
biburl = {https://www.bibsonomy.org/bibtex/22c025a67423eff43a13d9db35dadeb4a/alexis99},
description = {Two SVM formulations of ranking (or rather ordinal regression) are introduced: one maximizes the individual margins separating the different classes and one maximizes the sum of the margins. There might be a way to introduce these types of losses in the MMMF formulation.},
editor = {NIPS},
interhash = {20c44188b32fe08a6ea3c5f63731099b},
intrahash = {2c025a67423eff43a13d9db35dadeb4a},
keywords = {collaborativefiltering ranking svm},
note = {There might be a way to introduce these types of ranking losses in the MMMF formulation.},
timestamp = {2009-09-28T17:33:52.000+0200},
title = {Ranking with Large Margin Principle: Two Approaches},
url = {http://books.nips.cc/papers/files/nips15/AA58.pdf},
year = 2003
}