This paper introduces DNGaussian, a depth-regularized framework based on 3D Gaussian radiance fields, offering real-time, high-quality few-shot novel view synthesis at low cost.
Our study reveals two inherent problems in regularizing 3DGS with depth information: 1) depth should be used in a targeted way to constrain only part of the parameters, rather than the entire model as in previous NeRF approaches; 2) a traditional fixed-scale depth loss cannot provide sufficient regularization for the Gaussians to learn geometry. By analyzing and addressing these two problems, DNGaussian achieves outstanding performance.
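To make the second point concrete, below is a minimal sketch of what a scale-free, globally and locally normalized depth loss could look like. This is an illustration, not the released implementation: the mean/std style of normalization, the patch size, and the equal weighting of the two terms are all assumptions.

```python
import torch
import torch.nn.functional as F

def global_normalize(depth, eps=1e-6):
    # Remove absolute scale and shift so the loss compares relative
    # depth structure instead of raw (scale-ambiguous) monocular values.
    return (depth - depth.mean()) / (depth.std() + eps)

def local_normalize(depth, patch_size=8, eps=1e-6):
    # Normalize each non-overlapping patch independently, which keeps the
    # loss sensitive to small local depth changes that a single global
    # scale would wash out.
    patches = depth.unfold(0, patch_size, patch_size).unfold(1, patch_size, patch_size)
    mean = patches.mean(dim=(-2, -1), keepdim=True)
    std = patches.std(dim=(-2, -1), keepdim=True)
    return (patches - mean) / (std + eps)

def global_local_depth_loss(rendered, mono, patch_size=8):
    # rendered, mono: (H, W) depth maps; mono is the monocular prior.
    g = F.l1_loss(global_normalize(rendered), global_normalize(mono))
    l = F.l1_loss(local_normalize(rendered, patch_size),
                  local_normalize(mono, patch_size))
    return g + l
```

With both terms, such a regularizer would be invariant to the unknown scale of the monocular prior while still penalizing fine-grained local shape errors.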
Extensive experiments on the LLFF, DTU, and Blender datasets demonstrate that DNGaussian achieves comparable or better quality than state-of-the-art methods while significantly reducing memory cost, cutting training time by 25x, and rendering over 3000x faster.
Our framework starts from a random initialization and consists of a Color Supervision module and a Depth Regularization module. For depth regularization, we render a Hard Depth and a Soft Depth for the input view and compute separate losses against the pre-generated monocular depth map using the proposed Global-Local Depth Normalization. Finally, the resulting Gaussian field enables efficient, high-quality novel view synthesis.
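As a hedged sketch of how the two rendered depths might be combined during training: the weights `w_hard` and `w_soft` are illustrative, and producing the hard depth by overriding Gaussian opacities is our reading of the Hard Depth described above, not the paper's confirmed mechanism.

```python
def depth_regularization_loss(hard_depth, soft_depth, mono_depth,
                              w_hard=0.1, w_soft=0.1, patch_size=8):
    # hard_depth: depth rendered with Gaussian opacities overridden (e.g.
    # pushed toward 1), so its gradient constrains Gaussian centers and shapes.
    # soft_depth: the standard alpha-blended depth, which also involves opacity.
    # Both are compared to the monocular prior with the normalized loss
    # defined earlier.
    return (w_hard * global_local_depth_loss(hard_depth, mono_depth, patch_size)
            + w_soft * global_local_depth_loss(soft_depth, mono_depth, patch_size))
```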
Comparison with current SOTA baselines. Zoom in for better visualization.
@article{li2024dngaussian,
title={DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization},
author={Jiahe Li and Jiawei Zhang and Xiao Bai and Jin Zheng and Xin Ning and Jun Zhou and Lin Gu},
journal={arXiv preprint arXiv:2403.06912},
year={2024}
}