MIMOSA: A Multi-Modal SLAM Framework for Resilient Autonomy against Sensor Degradation
Chapter (submitted version)
Permanent link: https://hdl.handle.net/11250/3047985
Publication date: 2022
Original version (DOI): 10.1109/IROS47612.2022.9981108

Abstract
This paper presents a framework for Multi-Modal SLAM (MIMOSA) that utilizes a nonlinear factor graph as the underlying representation to provide loosely-coupled fusion of any number of sensing modalities. Tailored to the goal of enabling resilient robotic autonomy in GPS-denied and perceptually-degraded environments, MIMOSA currently contains modules for point cloud registration, fusion of multiple odometry estimates relying on visible-light and thermal vision, as well as inertial measurement propagation. A flexible backend utilizes the estimates from the various modalities as relative transformation factors. The method is designed to be robust to degeneracy through the maintenance and tracking of modality-specific health metrics, while also being inherently tolerant to sensor failure. We detail this framework alongside our implementation for handling high-rate asynchronous sensor measurements and evaluate its performance on data from autonomous subterranean robotic exploration missions using legged and aerial robots.
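The fusion strategy described in the abstract, combining relative motion estimates from several modalities while gating out degraded sensors via health metrics, can be sketched in a highly simplified 1-D form. This is an illustrative approximation only, not the paper's factor-graph implementation; the function name, the tuple layout, and the `HEALTH_MIN` threshold are all hypothetical.

```python
def fuse_relative_motions(estimates):
    """Fuse per-modality relative motion estimates (1-D for brevity) by
    inverse-variance weighting scaled by a per-modality health metric.
    A modality whose health falls below a threshold is dropped entirely,
    loosely mirroring degeneracy-aware gating.

    estimates: list of (delta, variance, health) tuples, one per modality.
    """
    HEALTH_MIN = 0.2  # hypothetical gating threshold
    num, den = 0.0, 0.0
    for delta, var, health in estimates:
        if health < HEALTH_MIN:   # sensor degraded: exclude this modality
            continue
        w = health / var          # weight by both confidence and health
        num += w * delta
        den += w
    if den == 0.0:
        raise RuntimeError("all modalities degraded; no estimate available")
    return num / den

# Example: healthy LiDAR odometry, degraded thermal odometry (gated out),
# and a noisier inertial propagation estimate.
fused = fuse_relative_motions([
    (1.00, 0.01, 0.9),   # LiDAR-style odometry
    (1.50, 0.02, 0.1),   # thermal odometry, health below threshold
    (1.10, 0.10, 0.8),   # inertial propagation
])
```

In the actual framework each fused relative transformation would enter the nonlinear factor graph as a relative transformation factor between consecutive pose variables, rather than being averaged as a scalar.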