GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction

1Beihang University, 2Rawmantic AI, 3Macquarie University, 4RIKEN AIP, 5The University of Tokyo
NeurIPS 2025 Spotlight

GeoSVR presents an explicit voxel-based surface reconstruction framework, exploring and extending the under-investigated potential of sparse voxels to achieve accurate, detailed, and complete surface reconstruction with high efficiency.

Abstract

In this paper, we introduce Geometric Sparse-Voxel Reconstruction, abbreviated as GeoSVR, an explicit voxel-based framework that explores and extends the under-investigated potential of sparse voxels for achieving accurate, detailed, and complete surface reconstruction.

To ensure correct scene convergence, we first propose a Voxel-Uncertainty Depth Constraint that maximizes the effect of monocular depth cues while applying a voxel-oriented uncertainty to avoid quality degradation, enabling effective and robust scene constraints while preserving highly accurate geometries. Subsequently, a Sparse Voxel Surface Regularization is designed to enhance the geometric consistency of tiny voxels and facilitate the voxel-based formation of sharp and accurate surfaces.
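The depth constraint above can be illustrated with a minimal NumPy sketch: a relative monocular depth prior is aligned to the rendered depth with a least-squares scale and shift (standard practice for scale-free depth cues), and the resulting residual is weighted by a per-pixel voxel uncertainty. The uncertainty mapping from voxel size and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def voxel_uncertainty(voxel_sizes, max_size):
    # Illustrative assumption: coarser (larger) voxels are less converged,
    # so the monocular depth prior is trusted more there, while fine voxels
    # rely on photometric cues and receive a smaller weight.
    return np.clip(voxel_sizes / max_size, 0.0, 1.0)

def uncertainty_weighted_depth_loss(rendered_depth, mono_depth, uncertainty):
    # Align the scale-free monocular depth to the rendered depth with a
    # least-squares scale and shift, then penalize the weighted residual.
    A = np.stack([mono_depth, np.ones_like(mono_depth)], axis=1)
    scale, shift = np.linalg.lstsq(A, rendered_depth, rcond=None)[0]
    aligned = scale * mono_depth + shift
    return float(np.mean(uncertainty * (rendered_depth - aligned) ** 2))
```

When the monocular depth is an exact affine transform of the rendered depth, the alignment removes the discrepancy entirely and the loss vanishes; the uncertainty term then only modulates how strongly remaining disagreement is penalized per pixel.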

Extensive experiments demonstrate our superior performance compared to existing methods across diverse challenging scenarios, excelling in geometric accuracy, detail preservation, and reconstruction completeness while maintaining high efficiency.

Video

Reconstruction on DTU

Comparisons

Complete, Accurate and Smooth Surface Reconstruction

High-Fidelity Reconstruction for Intricate Real-World Scene

Mesh on Tanks and Temples Dataset

Acknowledgements

This method is developed on the excellent open-source projects SVRaster and Gaussian Splatting. Regularizations are implemented with the help of Depth-Anything-V2, DNGaussian, PGSR, and Geo-Neus. Thanks for their great contributions.

BibTeX

@article{li2025geosvr,
    title={GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction},
    author={Li, Jiahe and Zhang, Jiawei and Zhang, Youmin and Bai, Xiao and Zheng, Jin and Yu, Xiaohan and Gu, Lin},
    journal={Advances in Neural Information Processing Systems},
    year={2025}
}