Real-Time Volume-Rendering Image Denoising Based on Spatiotemporal Weighted Kernel Prediction

  • Xinran Xu
  • Chunxiao Xu
  • Lingxiao Zhao* (*Corresponding author for this work)

Research output: Contribution to journal › Article › peer-review

Abstract

Volumetric Path Tracing (VPT) based on Monte Carlo (MC) sampling often requires a large number of samples to produce high-quality images, but real-time applications must limit the samples per pixel to maintain interactive frame rates, which leads to significant noise. Traditional real-time denoising methods feed radiance and geometric features into neural networks, but lightweight networks struggle with temporal stability and complex mapping relationships, producing blurry results. To address these issues, a spatiotemporal lightweight neural network is proposed to enhance the denoising of VPT-rendered images with low samples per pixel. First, a reprojection technique is employed to obtain features from historical frames. Next, a dual-input convolutional neural network architecture is designed to predict filtering kernels: radiance and geometric features are encoded independently, and the encoded geometric features guide the pixel-wise fitting of the radiance filters. Finally, the learned filtering kernels are applied to the spatiotemporal filtering of the images to produce the denoised results. Experimental results across multiple denoising datasets demonstrate that this approach outperforms baseline models in feature extraction and detail representation while effectively suppressing noise and improving temporal stability.
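To make the pipeline concrete, below is a minimal PyTorch sketch of the kernel-prediction idea the abstract describes: two independent encoders for the noisy radiance and the geometric features, a head that predicts per-pixel filtering kernels plus a temporal blend weight, and application of those kernels followed by accumulation with a reprojected history frame. All module names, channel counts, and the 3×3 kernel size are illustrative assumptions, not the authors' actual architecture.

```python
# Sketch of a dual-input kernel-prediction denoiser in the spirit of the
# abstract. Channel counts and layer shapes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualInputKPN(nn.Module):
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        # Independent encoders for noisy radiance and geometric features
        # (e.g. depth + normals + albedo = 7 channels, an assumption here).
        self.rad_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.geo_enc = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        # Head predicts a k*k spatial kernel per pixel plus one scalar
        # blend weight for the reprojected history frame.
        self.head = nn.Conv2d(64, k * k + 1, 3, padding=1)

    def forward(self, radiance, geometry, history):
        # Geometric encoding is concatenated with the radiance encoding so
        # it can guide the per-pixel kernel prediction.
        feats = torch.cat([self.rad_enc(radiance), self.geo_enc(geometry)], dim=1)
        out = self.head(feats)
        kernels = F.softmax(out[:, : self.k * self.k], dim=1)  # normalized weights
        alpha = torch.sigmoid(out[:, -1:])                     # temporal blend
        # Apply the predicted kernel at every pixel (spatial filtering).
        b, _, h, w = radiance.shape
        patches = F.unfold(radiance, self.k, padding=self.k // 2)
        patches = patches.view(b, 3, self.k * self.k, h, w)
        filtered = (patches * kernels.unsqueeze(1)).sum(dim=2)
        # Temporal accumulation with the reprojected previous output.
        return alpha * history + (1 - alpha) * filtered
```

In this sketch the softmax guarantees that each per-pixel kernel is a convex combination of neighboring radiance samples, and the sigmoid blend weight implements exponential temporal accumulation; the `history` argument is assumed to be the previous denoised frame already warped into the current view by the reprojection step.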
Original language: English
Article number: 126
Journal: Journal of Imaging
Volume: 11
Issue number: 4
DOIs:
Publication status: Published - 21 Apr 2025

Keywords

  • Ray tracing
  • Volume rendering image denoising
  • Realistic volume rendering
