SplaTAM: Splat, Track & Map 3D Gaussians
for Dense RGB-D SLAM

1Carnegie Mellon University, 2Massachusetts Institute of Technology

SplaTAM enables precise camera tracking and high-fidelity reconstruction
in challenging real-world scenarios.

Abstract

Dense simultaneous localization and mapping (SLAM) is pivotal for embodied scene understanding. Recent work has shown that 3D Gaussians enable high-quality reconstruction and real-time rendering of scenes using multiple posed cameras. In this light, we show for the first time that representing a scene by a 3D Gaussian Splatting radiance field can enable dense SLAM using a single unposed monocular RGB-D camera. Our method, SplaTAM, addresses the limitations of prior radiance field-based representations by enabling fast rendering and optimization, the ability to determine whether areas have been previously mapped, and structured map expansion through the addition of new Gaussians. In particular, we employ an online tracking and mapping pipeline tailored to the underlying Gaussian representation, using silhouette-guided optimization via differentiable rendering. Extensive experiments on simulated and real-world data show that SplaTAM achieves up to 2× the performance of state-of-the-art methods in camera pose estimation, map construction, and novel-view synthesis, demonstrating its superiority over existing approaches.
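The silhouette-guided optimization mentioned above can be illustrated with a small sketch: the rendered silhouette (accumulated Gaussian opacity) indicates which pixels are already well mapped, and restricting the tracking loss to those pixels keeps unmapped regions from biasing the camera-pose update. The function and threshold below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def silhouette_guided_loss(render_rgb, render_depth, silhouette,
                           gt_rgb, gt_depth, sil_thresh=0.99):
    """Mean L1 color+depth error over well-mapped pixels only.

    `silhouette` is the per-pixel accumulated Gaussian opacity from the
    differentiable renderer; pixels below `sil_thresh` are treated as
    not-yet-mapped and excluded from the tracking loss.
    (Hypothetical sketch; names and threshold are assumptions.)
    """
    mask = silhouette > sil_thresh                   # well-mapped pixels only
    color_err = np.abs(render_rgb - gt_rgb).sum(-1)  # per-pixel L1 over RGB
    depth_err = np.abs(render_depth - gt_depth)      # per-pixel L1 depth
    return float(((color_err + depth_err) * mask).sum() / max(mask.sum(), 1))
```

In an actual pipeline this scalar would be minimized over the camera pose by gradient descent through the differentiable rasterizer; the masking is what lets tracking ignore newly revealed, unmapped areas.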

Video

ScanNet++ S2

Online Reconstruction

3D Novel View Loop

Online iPhone Reconstructions

Replica Room 0 Novel View Renderings

Nice-SLAM

Point-SLAM

SplaTAM

Note: Nice-SLAM & Point-SLAM use ground truth novel view depth for rendering.

Replica Office 1 Novel View Renderings

Nice-SLAM

Point-SLAM

SplaTAM

Note: Nice-SLAM & Point-SLAM use ground truth novel view depth for rendering.

Camera Tracking Optimization

ScanNet++ S1

ScanNet++ S2

Replica R0

BibTeX

@article{keetha2023splatam,
  title={SplaTAM: Splat, Track \& Map 3D Gaussians for Dense RGB-D SLAM},
  author={Keetha, Nikhil and Karhade, Jay and Jatavallabhula, Krishna Murthy and Yang, Gengshan and Scherer, Sebastian and Ramanan, Deva and Luiten, Jonathon},
  journal={arXiv preprint},
  year={2023}
}