@misc{wadhwa2018synthetic,
abstract = {Shallow depth-of-field is commonly used by photographers to isolate a subject
from a distracting background. However, standard cell phone cameras cannot
produce such images optically, as their short focal lengths and small apertures
capture nearly all-in-focus images. We present a system to computationally
synthesize shallow depth-of-field images with a single mobile camera and a
single button press. If the image is of a person, we use a person segmentation
network to separate the person and their accessories from the background. If
available, we also use dense dual-pixel auto-focus hardware, effectively a
2-sample light field with an approximately 1 millimeter baseline, to compute a
dense depth map. These two signals are combined and used to render a defocused
image. Our system can process a 5.4 megapixel image in 4 seconds on a mobile
phone, is fully automatic, and is robust enough to be used by non-experts. The
modular nature of our system allows it to degrade naturally in the absence of a
dual-pixel sensor or a human subject.},
added-at = {2022-12-18T09:35:47.000+0100},
author = {Wadhwa, Neal and Garg, Rahul and Jacobs, David E. and Feldman, Bryan E. and Kanazawa, Nori and Carroll, Robert and Movshovitz-Attias, Yair and Barron, Jonathan T. and Pritch, Yael and Levoy, Marc},
biburl = {https://www.bibsonomy.org/bibtex/2538cdce4d6385c1820710f447ee7b389/t_seizinger},
description = {[1806.04171] Synthetic Depth-of-Field with a Single-Camera Mobile Phone},
doi = {10.1145/3197517.3201329},
interhash = {7f66a96c84f6810aa91823aef7843bda},
intrahash = {538cdce4d6385c1820710f447ee7b389},
keywords = {cv_bokehseminar},
note = {arXiv:1806.04171. Accepted to SIGGRAPH 2018. Basis for Portrait Mode on Google Pixel 2 and Pixel 2 XL},
timestamp = {2022-12-18T09:35:47.000+0100},
title = {Synthetic Depth-of-Field with a Single-Camera Mobile Phone},
url = {http://arxiv.org/abs/1806.04171},
year = 2018
}