Abstract
In augmented reality applications, consistent illumination between virtual and real objects is important for creating
an immersive user experience. Consistent illumination can be achieved by parameterising the virtual
illumination model so that it matches real-world lighting conditions. In this study, we developed a
method to reconstruct the general light direction from red-green-blue (RGB) images of real-world scenes using a
modified VGG-16 neural network. We reconstructed the general light direction as azimuth and elevation angles. To
avoid inaccurate results caused by coordinate uncertainty occurring at steep elevation angles, we further introduced
stereographically projected coordinates. Unlike recent deep-learning-based approaches for reconstructing the light
source direction, our approach does not require depth information and thus does not rely on special red-green-blue-depth (RGB-D) images as input.
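To illustrate the projected representation, a minimal sketch of mapping azimuth and elevation angles to stereographic coordinates is given below; the choice of projection pole and the function name are assumptions, since the abstract does not specify the exact formulation.

```python
import numpy as np

def stereographic(azimuth, elevation):
    """Hypothetical sketch: project a light direction given as (azimuth,
    elevation) in radians onto a plane via stereographic projection.

    Near elevation = 90 deg the azimuth becomes ill-defined, but the
    projected (x, y) coordinates converge smoothly to (0, 0), which is
    the motivation for this representation.
    """
    # Unit direction vector on the sphere (z points towards the zenith).
    dx = np.cos(elevation) * np.cos(azimuth)
    dy = np.cos(elevation) * np.sin(azimuth)
    dz = np.sin(elevation)
    # Project from the pole opposite the zenith onto the equatorial plane.
    return dx / (1.0 + dz), dy / (1.0 + dz)
```

One plausible way to adapt VGG-16 for this regression task, assuming a PyTorch/torchvision setup (the paper's actual modifications are not detailed in the abstract), is to swap the 1000-class head for a two-unit output:

```python
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
# Replace the final classification layer with a 2-unit regression head
# predicting the stereographically projected light-direction coordinates.
model.classifier[6] = nn.Linear(4096, 2)
```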