TY - GEN
T1 - Learning Implicit Surface Light Fields
AU - Oechsle, Michael
AU - Niemeyer, Michael
AU - Reiser, Christian
AU - Mescheder, Lars
AU - Strauss, Thilo
AU - Geiger, Andreas
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry, and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion, independently of the geometry. Moreover, we condition the surface light field on the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capability of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance, including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
AB - Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks. While existing works use simple texture models to represent object appearance, photo-realistic image synthesis requires reasoning about the complex interplay of light, geometry, and surface properties. In this work, we propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field. In contrast to existing representations, our implicit model represents surface light fields in a continuous fashion, independently of the geometry. Moreover, we condition the surface light field on the location and color of a small light source. Compared to traditional surface light field models, this allows us to manipulate the light source and relight the object using environment maps. We further demonstrate the capability of our model to predict the visual appearance of an unseen object from a single real RGB image and corresponding 3D shape information. As evidenced by our experiments, our model is able to infer rich visual appearance, including shadows and specular reflections. Finally, we show that the proposed representation can be embedded into a variational auto-encoder for generating novel appearances that conform to the specified illumination conditions.
KW - 3D Deep Learning
KW - Appearance modelling
KW - Implicit Functions
KW - Novel View Synthesis
UR - http://www.scopus.com/inward/record.url?scp=85098608527&partnerID=8YFLogxK
U2 - 10.1109/3DV50981.2020.00055
DO - 10.1109/3DV50981.2020.00055
M3 - Conference Proceeding
AN - SCOPUS:85098608527
T3 - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
SP - 452
EP - 462
BT - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 8th International Conference on 3D Vision, 3DV 2020
Y2 - 25 November 2020 through 28 November 2020
ER -