SplaTAM: Splat, Track & Map 3D Gaussians
for Dense RGB-D SLAM

CVPR 2024

¹Carnegie Mellon University, ²Massachusetts Institute of Technology

SplaTAM enables precise camera tracking and high-fidelity reconstruction
in challenging real-world scenarios.

Abstract

Dense simultaneous localization and mapping (SLAM) is crucial for robotics and augmented-reality applications. However, current methods are often hampered by the non-volumetric or implicit way they represent a scene. This work introduces SplaTAM, an approach that, for the first time, leverages explicit volumetric representations, i.e., 3D Gaussians, to enable high-fidelity reconstruction from a single unposed RGB-D camera, surpassing the capabilities of existing methods. SplaTAM employs a simple online tracking and mapping system tailored to the underlying Gaussian representation, using a silhouette mask to elegantly capture the presence of scene density. This combination enables several benefits over prior representations, including fast rendering and dense optimization, quickly determining whether areas have been previously mapped, and structured map expansion by adding more Gaussians. Extensive experiments show that SplaTAM achieves up to 2× better performance than existing methods in camera pose estimation, map construction, and novel-view synthesis, paving the way for more immersive high-fidelity SLAM applications.
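To make the silhouette-guided optimization concrete, below is a minimal sketch, in PyTorch, of how such a silhouette-masked RGB-D rendering loss could look. This is not the official SplaTAM implementation: render_rgbd_silhouette is a hypothetical placeholder for a differentiable Gaussian rasterizer that returns rendered color, depth, and an accumulated-opacity (silhouette) image.

import torch

def tracking_loss(render_rgbd_silhouette, cam_pose, gaussians, gt_rgb, gt_depth,
                  sil_thresh=0.99):
    # Render color (3, H, W), depth (H, W), and silhouette (H, W) from the
    # current map and the candidate camera pose via differentiable splatting.
    rgb, depth, silhouette = render_rgbd_silhouette(gaussians, cam_pose)
    # The silhouette encodes accumulated Gaussian density per pixel; pixels
    # below the threshold are not yet well mapped and are excluded, so the
    # pose is only optimized against reliably reconstructed regions.
    mask = (silhouette > sil_thresh) & (gt_depth > 0)
    color_err = torch.abs(rgb - gt_rgb).sum(dim=0)  # L1 over channels, (H, W)
    depth_err = torch.abs(depth - gt_depth)         # L1 depth error, (H, W)
    return (color_err + depth_err)[mask].sum()

The same silhouette is also what lets the system quickly decide whether an area has already been mapped: pixels with low silhouette values (or large depth error) mark unmapped regions where new Gaussians are added to expand the map.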

Video

ScanNet++ S2

[Videos: Online Reconstruction, 3D Novel View Loop]

Online iPhone Reconstructions

Replica Room 0 Novel View Renderings

[Video comparison: Nice-SLAM, Point-SLAM, SplaTAM]

Note: Nice-SLAM & Point-SLAM use ground-truth novel-view depth for rendering.

Replica Office 1 Novel View Renderings

[Video comparison: Nice-SLAM, Point-SLAM, SplaTAM]

Note: Nice-SLAM & Point-SLAM use ground-truth novel-view depth for rendering.

Camera Tracking Optimization

[Videos: ScanNet++ S1, ScanNet++ S2, Replica R0]
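The clips above visualize the per-frame pose optimization. As a rough sketch, and assuming a loss like tracking_loss above, tracking amounts to gradient descent on the camera pose, initialized by extrapolating the previous camera motion (a constant-velocity assumption). The 7-vector pose parameterization, the Adam optimizer, and the iteration count and learning rate below are illustrative choices, not SplaTAM's exact settings.

import torch

def track_frame(gaussians, gt_rgb, gt_depth, prev_pose, prev_prev_pose,
                render_fn, iters=40, lr=2e-3):
    # Constant-velocity initialization: extrapolate the last camera motion.
    # Poses are illustrated as 7-vectors [qw, qx, qy, qz, tx, ty, tz];
    # this naive linear extrapolation is a stand-in for composing motions
    # properly on SE(3).
    pose = (prev_pose + (prev_pose - prev_prev_pose)).clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = tracking_loss(render_fn, pose, gaussians, gt_rgb, gt_depth)
        loss.backward()          # gradients flow through the rasterizer
        opt.step()
        with torch.no_grad():    # keep the rotation a unit quaternion
            pose[:4] /= pose[:4].norm()
    return pose.detach()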

Concurrent work

Given the fast pace of research these days, five concurrent SLAM papers using 3D Gaussians as the underlying representation appeared on arXiv around the same time. Surprisingly, each takes a unique approach to SLAM with 3D Gaussians:

GS-SLAM performs coarse-to-fine camera tracking based on a sparse selection of Gaussians.

Gaussian Splatting SLAM performs monocular SLAM, with densification driven by depth statistics.

Photo-SLAM couples ORB-SLAM3-based camera tracking with 3DGS-based mapping.

COLMAP-Free 3DGS uses monocular depth estimation with 3DGS.

Gaussian-SLAM couples DROID-SLAM-based camera tracking with active & inactive 3DGS sub-maps.

BibTeX


@inproceedings{keetha2024splatam,
  title={SplaTAM: Splat, Track \& Map 3D Gaussians for Dense RGB-D SLAM},
  author={Keetha, Nikhil and Karhade, Jay and Jatavallabhula, Krishna Murthy and Yang, Gengshan and Scherer, Sebastian and Ramanan, Deva and Luiten, Jonathon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}