Sketch-based modeling with a differentiable renderer

Nan Xiang, Ruibin Wang, Tao Jiang, Li Wang, Yanran Li, Xiaosong Yang*, Jianjun Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Sketch-based modeling aims to recover three-dimensional (3D) shape from two-dimensional line drawings. However, due to the sparsity and ambiguity of sketches, it is extremely challenging for computers to interpret line drawings of physical objects. Most conventional systems are restricted to specific scenarios, such as recovering particular categories of shapes, and therefore generalize poorly. Recent progress in deep learning has sparked new ideas for solving computer vision and pattern recognition problems. In this work, we present an end-to-end learning framework that predicts 3D shape from line drawings. Our approach is based on a two-step strategy: it first converts the sketch image to its normal image, then recovers the 3D shape. A differentiable renderer is proposed and incorporated into this framework, which allows the rendering pipeline to be integrated with neural networks. Experimental results show that our method outperforms the state of the art, demonstrating that our framework is able to cope with the challenges of single sketch-based 3D shape modeling.
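The abstract's two-step strategy (sketch → normal map → 3D shape, with a differentiable render step that keeps the whole pipeline trainable) can be sketched as follows. This is a minimal illustration assuming a PyTorch implementation; the layer sizes, the voxel-grid output, and the toy max-projection "renderer" are all assumptions for demonstration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SketchToNormal(nn.Module):
    """Stage 1 (illustrative): map a 1-channel line drawing to a 3-channel normal map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # normals in [-1, 1]
        )

    def forward(self, sketch):
        return self.net(sketch)

class NormalToShape(nn.Module):
    """Stage 2 (illustrative): map the normal map to a coarse voxel occupancy grid."""
    def __init__(self, res=16):
        super().__init__()
        self.res = res
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, res ** 3), nn.Sigmoid(),  # occupancy in [0, 1]
        )

    def forward(self, normal):
        b = normal.shape[0]
        return self.net(normal).view(b, self.res, self.res, self.res)

sketch = torch.randn(2, 1, 64, 64)       # batch of line drawings
normal = SketchToNormal()(sketch)        # step 1: sketch -> normal map
shape = NormalToShape()(normal)          # step 2: normal map -> voxels

# Toy differentiable "render": project occupancies to a silhouette by a
# max along the depth axis; because every op is differentiable, a loss on
# the rendered image back-propagates through the whole pipeline.
silhouette = shape.max(dim=-1).values
```

The key property illustrated is end-to-end differentiability: gradients of an image-space loss on `silhouette` flow back through both networks, which is what incorporating a differentiable renderer into the framework buys.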

Original language: English
Article number: e1939
Journal: Computer Animation and Virtual Worlds
Volume: 31
Issue number: 4-5
DOIs
Publication status: Published - 1 Jul 2020
Externally published: Yes

Keywords

  • deep learning
  • shape prediction
  • sketch-based modeling