Abstract
This work investigates learning pixel-wise semantic image segmentation in
urban scenes without any manual annotation, using only the raw, non-curated
data collected by cars that, equipped with cameras and LiDAR sensors, drive
around a city. Our contributions are threefold. First, we propose a novel
method for
cross-modal unsupervised learning of semantic image segmentation by leveraging
synchronized LiDAR and image data. The key ingredient of our method is the use
of an object proposal module that analyzes the LiDAR point cloud to obtain
proposals for spatially consistent objects. Second, we show that these 3D
object proposals can be aligned with the input images and reliably clustered
into semantically meaningful pseudo-classes. Finally, we develop a cross-modal
distillation approach that leverages image data partially annotated with the
resulting pseudo-classes to train a transformer-based model for image semantic
segmentation. We show the generalization capabilities of our method by testing
on four different datasets (Cityscapes, Dark Zurich, Nighttime Driving and
ACDC) without any finetuning, and demonstrate significant improvements over
the current state of the art on this problem. See the project webpage
https://vobecant.github.io/DriveAndSegment/ for the code and more.
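
To make the three contributions concrete, below is a minimal, hypothetical
Python sketch of the pipeline the abstract describes. It is not the authors'
implementation (see the project webpage for the actual code): it assumes a
pinhole camera with intrinsics K and a LiDAR-to-camera rigid transform
T_cam_from_lidar, substitutes DBSCAN for the paper's object proposal module
and k-means for the pseudo-class clustering, and all function names and
parameters are illustrative.

import numpy as np
import torch.nn.functional as F
from sklearn.cluster import DBSCAN, KMeans

def lidar_object_proposals(points_xyz, eps=0.5, min_points=20):
    # Group the point cloud into spatially consistent clusters; DBSCAN is a
    # simple stand-in for the paper's object proposal module.
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xyz)
    return [points_xyz[labels == k] for k in np.unique(labels) if k != -1]

def project_to_image(points_xyz, K, T_cam_from_lidar, min_depth=0.1):
    # Align a 3D proposal with the synchronized image: rigid transform into
    # the camera frame, then pinhole projection to pixel coordinates.
    pts_h = np.c_[points_xyz, np.ones(len(points_xyz))]   # N x 4 homogeneous
    cam = (T_cam_from_lidar @ pts_h.T)[:3]                # 3 x N camera frame
    front = cam[2] > min_depth                            # drop points behind camera
    uv = K @ cam[:, front]
    return (uv[:2] / uv[2]).T                             # M x 2 pixel coordinates

def pseudo_classes(proposal_features, n_classes=10, seed=0):
    # Cluster per-proposal feature vectors into pseudo-classes (k-means here).
    return KMeans(n_clusters=n_classes, random_state=seed).fit_predict(proposal_features)

def distillation_loss(logits, pseudo_labels, ignore_index=255):
    # Train the segmentation network on partially annotated images: pixels
    # covered by a projected proposal carry its pseudo-class; the remaining
    # pixels are marked ignore_index and contribute nothing to the loss.
    return F.cross_entropy(logits, pseudo_labels, ignore_index=ignore_index)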