Synthetic Depth-of-Field with a Single-Camera Mobile Phone

Wadhwa et al. (2018). arXiv:1806.04171. Comment: Accepted to SIGGRAPH 2018. Basis for Portrait Mode on Google Pixel 2 and Pixel 2 XL.
DOI: 10.1145/3197517.3201329

Abstract

Shallow depth-of-field is commonly used by photographers to isolate a subject from a distracting background. However, standard cell phone cameras cannot produce such images optically, as their short focal lengths and small apertures capture nearly all-in-focus images. We present a system to computationally synthesize shallow depth-of-field images with a single mobile camera and a single button press. If the image is of a person, we use a person segmentation network to separate the person and their accessories from the background. If available, we also use dense dual-pixel auto-focus hardware, effectively a 2-sample light field with an approximately 1 millimeter baseline, to compute a dense depth map. These two signals are combined and used to render a defocused image. Our system can process a 5.4 megapixel image in 4 seconds on a mobile phone, is fully automatic, and is robust enough to be used by non-experts. The modular nature of our system allows it to degrade naturally in the absence of a dual-pixel sensor or a human subject.
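
The pipeline summarized above combines two signals, a person segmentation mask and a dual-pixel depth map, to drive a spatially varying blur. The sketch below is not the paper's renderer; it is a minimal, hypothetical illustration that assumes a precomputed depth map and subject mask are already available, and it substitutes a layered Gaussian blur (growing with distance from a chosen focal plane) for the paper's more sophisticated defocus rendering. The function name synthetic_defocus and all parameters are invented for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def synthetic_defocus(image, depth, mask, focal_depth, max_sigma=8.0, n_layers=6):
        """Depth-dependent blur: a crude stand-in for the paper's defocus renderer.

        image: HxWx3 float array in [0, 1]
        depth: HxW float array (larger = farther), e.g. from dual-pixel stereo
        mask:  HxW float array in [0, 1], 1 where the segmented person is
        focal_depth: depth value to keep in focus
        """
        # Blur strength grows with distance from the focal plane; the person
        # mask zeroes the blur on the subject so depth noise cannot soften it.
        blur = np.abs(depth - focal_depth)
        blur = blur / (blur.max() + 1e-8) * max_sigma
        blur = blur * (1.0 - mask)

        # Quantize blur strengths into a few layers, blur the whole image once
        # per layer, then pick each pixel from the layer matching its strength.
        edges = np.linspace(0.0, max_sigma, n_layers + 1)
        out = image.copy()
        for i in range(n_layers):
            sigma = 0.5 * (edges[i] + edges[i + 1])
            # Pixels whose blur strength falls in this bin; near-zero blur
            # (the in-focus subject) is left untouched.
            sel = (blur >= edges[i]) & (blur < edges[i + 1] + 1e-8) & (blur > 0.25)
            if not sel.any():
                continue
            blurred = np.stack(
                [gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1
            )
            out[sel] = blurred[sel]
        return out

Under these assumptions, a call such as synthetic_defocus(img, depth, mask, focal_depth=float(np.median(depth[mask > 0.5]))) would keep the segmented person sharp while blurring the rest of the scene progressively with its distance from the subject's depth.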

