Learning Correspondence from the Cycle-Consistency of Time
X. Wang, A. Jabri, and A. Efros. (2019). arXiv:1903.07593. Comment: CVPR 2019 Oral. Project page: http://ajabri.github.io/timecycle.
Abstract
We introduce a self-supervised method for learning visual correspondence from
unlabeled video. The main idea is to use cycle-consistency in time as a free
supervisory signal for learning visual representations from scratch. At
training time, our model learns a feature map representation to be useful for
performing cycle-consistent tracking. At test time, we use the acquired
representation to find nearest neighbors across space and time. We demonstrate
the generalizability of the representation -- without finetuning -- across a
range of visual correspondence tasks, including video object segmentation,
keypoint tracking, and optical flow. Our approach outperforms previous
self-supervised methods and performs competitively with strongly supervised
methods.
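The cycle-consistency idea from the abstract can be illustrated in a few lines: track a patch forward through a clip by nearest-neighbor matching in feature space, then track it backward, and check whether it returns to where it started. The numpy sketch below is only an illustration of this principle, not the authors' implementation; all function names and the toy feature data are invented for the example.

```python
import numpy as np

def nearest_neighbor(query, feats):
    """Index of the row in `feats` most similar to `query` (cosine similarity)."""
    q = query / np.linalg.norm(query)
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return int(np.argmax(f @ q))

def cycle_error(clip_feats, start_idx):
    """Track a feature forward through the clip via nearest neighbors,
    then backward, and report how far the cycle lands from the start.
    `clip_feats` is a list of (num_patches, dim) arrays, one per frame."""
    idx = start_idx
    for t in range(1, len(clip_feats)):                 # forward: 0 -> T-1
        idx = nearest_neighbor(clip_feats[t - 1][idx], clip_feats[t])
    for t in range(len(clip_feats) - 2, -1, -1):        # backward: T-1 -> 0
        idx = nearest_neighbor(clip_feats[t + 1][idx], clip_feats[t])
    return abs(idx - start_idx)                         # 0 means the cycle closed

# Toy data: 8 patch features per frame, lightly perturbed across 4 frames.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
clip = [base + 0.01 * rng.normal(size=base.shape) for _ in range(4)]
print(cycle_error(clip, start_idx=3))  # small perturbations -> prints 0
```

In the paper, this cycle error is made differentiable and used as the training loss, so that a feature extractor learned from scratch is rewarded exactly when forward-then-backward tracking returns to its starting point.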
Description
Learning Correspondence from the Cycle-Consistency of Time
%0 Generic
%1 wang2019learning
%A Wang, Xiaolong
%A Jabri, Allan
%A Efros, Alexei A.
%D 2019
%K arch loss pose semisup tracking video
%T Learning Correspondence from the Cycle-Consistency of Time
%U http://arxiv.org/abs/1903.07593
%X We introduce a self-supervised method for learning visual correspondence from
unlabeled video. The main idea is to use cycle-consistency in time as a free
supervisory signal for learning visual representations from scratch. At
training time, our model learns a feature map representation to be useful for
performing cycle-consistent tracking. At test time, we use the acquired
representation to find nearest neighbors across space and time. We demonstrate
the generalizability of the representation -- without finetuning -- across a
range of visual correspondence tasks, including video object segmentation,
keypoint tracking, and optical flow. Our approach outperforms previous
self-supervised methods and performs competitively with strongly supervised
methods.
@misc{wang2019learning,
abstract = {We introduce a self-supervised method for learning visual correspondence from
unlabeled video. The main idea is to use cycle-consistency in time as a free
supervisory signal for learning visual representations from scratch. At
training time, our model learns a feature map representation to be useful for
performing cycle-consistent tracking. At test time, we use the acquired
representation to find nearest neighbors across space and time. We demonstrate
the generalizability of the representation -- without finetuning -- across a
range of visual correspondence tasks, including video object segmentation,
keypoint tracking, and optical flow. Our approach outperforms previous
self-supervised methods and performs competitively with strongly supervised
methods.},
added-at = {2019-04-09T23:02:43.000+0200},
author = {Wang, Xiaolong and Jabri, Allan and Efros, Alexei A.},
biburl = {https://www.bibsonomy.org/bibtex/2bb7181faee81822bb54d537929acd827/nmatsuk},
description = {Learning Correspondence from the Cycle-Consistency of Time},
interhash = {a4d543ad46a6e11b3cf34c4e9d689020},
intrahash = {bb7181faee81822bb54d537929acd827},
keywords = {arch loss pose semisup tracking video},
note = {cite arxiv:1903.07593; Comment: CVPR 2019 Oral. Project page: http://ajabri.github.io/timecycle},
timestamp = {2019-04-09T23:02:43.000+0200},
title = {Learning Correspondence from the Cycle-Consistency of Time},
url = {http://arxiv.org/abs/1903.07593},
year = 2019
}