ESA GNC Conference Papers Repository

Title:
A minimal state augmentation algorithm for vision-based navigation without using mapped landmarks
Authors:
M. San Martin, D.S. Bayard, D.T. Conway, M. Mandic, E.S. Bailey
Presented at:
Salzburg 2017
DOI:
Full paper:
Abstract:

Space exploration missions are currently being formulated that require close-proximity operations in the vicinity of small bodies (e.g., comets and asteroids). Such operations typically require real-time control technologies to enable touchdown, landing, and sampling. A common choice of navigation sensors to support these operations consists of an IMU (Inertial Measurement Unit), a monocular camera, and an altimeter; in addition, a star tracker is typically available on most 3-axis stabilized exploration spacecraft. To make the best use of the monocular camera, missions typically set aside time to survey the comet surface beforehand from a higher altitude. During this survey period, images are taken to map the surface and to identify distinctive landmarks for inclusion in the on-board map used for autonomous vision-based navigation down to the surface. A problem that arises is that landmarks mapped from a considerable height lose their relevance as the vehicle gets closer to the surface: features and image details that are obvious to the camera up close are not apparent in the survey images, so the landmarks in the on-board map may fail to be recognized. This lack of robustness near the surface is worrisome because that is precisely when the vehicle is in the most danger and when reliability is needed most.

An alternative approach is to generate features on the fly, in real time, as the vehicle descends toward the surface. Most generally, this problem can be addressed using SLAM (Simultaneous Localization and Mapping) algorithms from the literature. Unfortunately, full SLAM solutions are not practical for many real-time applications because of their oversized filter state dimensions and large computational overhead. Specifically, SLAM approaches augment the Kalman filter with 3 states for each of the N features observed, increasing the overall filter state order by 3N. This leads to extremely high-order state vectors, resulting in filters that are only marginally numerically stable and that demand an unwieldy amount of on-board computation.

This paper will describe MAVeN (Minimal State Augmentation Algorithm for Vision-Based Navigation), a new algorithm for vision-based navigation that estimates the position and velocity of a vehicle operating in close proximity to a planetary or small-body surface without the need for an on-board landmark map, requiring instead only a rough Digital Elevation Map (DEM) of the surface. MAVeN is based on a new conceptual approach that projects features observed in real time onto a shape model of the small-body surface, as represented by the on-board DEM, in such a way that the filter is augmented with only 3 extra states rather than the 3N states required by SLAM; a sketch of this projection idea is given below. In addition to the DEM, MAVeN requires knowledge of the spacecraft attitude relative to the small body, derived from the celestial attitude knowledge of the spacecraft (provided by the star tracker and IMU) and a model of the small-body rotation, both of which are available in these types of missions. Finally, MAVeN requires initial knowledge of the spacecraft position and velocity relative to the small body, as provided by ground navigation or by high-altitude on-board autonomous landmark-based navigation, which is also a very reasonable assumption.
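To illustrate the central idea, the following minimal Python sketch shows how a tracked feature's line-of-sight ray can be intersected with an on-board DEM to obtain a surface-fixed pseudo-landmark, so that feature positions never need to enter the filter state. This is not the authors' implementation: the function names, the ray-marching intersection scheme, the representation of the DEM as a height field z = h(x, y) in the body-fixed (base) frame, and the illustrative state counts are all assumptions made for this example.

    import numpy as np

    def pixel_to_ray(pixel, K, R_cam_to_base):
        # Convert a tracked feature's pixel coordinates into a unit
        # line-of-sight ray expressed in the small-body (base) frame,
        # using camera intrinsics K and the camera-to-base rotation
        # assumed known from the star tracker, IMU, and rotation model.
        u, v = pixel
        ray_cam = np.linalg.solve(K, np.array([u, v, 1.0]))
        ray_cam /= np.linalg.norm(ray_cam)
        return R_cam_to_base @ ray_cam

    def intersect_ray_with_dem(origin, ray, dem_height, step=1.0, max_range=5e3):
        # March along the ray from the estimated spacecraft position
        # until it drops below the DEM surface; return the intersection
        # point in the base frame. dem_height(x, y) -> z is the rough DEM.
        t, p_prev = 0.0, origin
        while t < max_range:
            t += step
            p = origin + t * ray
            if p[2] <= dem_height(p[0], p[1]):
                return 0.5 * (p_prev + p)  # midpoint of bracketing samples
            p_prev = p
        return None  # ray never hit the modeled surface

    # Filter-order bookkeeping: EKF-SLAM grows by 3 states per tracked
    # feature, while MAVeN's augmentation stays at 3 regardless of N.
    N = 100      # number of tracked features (illustrative)
    base = 9     # e.g., position, velocity, IMU bias (illustrative)
    print("SLAM filter order :", base + 3 * N)   # 309
    print("MAVeN filter order:", base + 3)       # 12

    # Example: flat DEM at z = 0, nadir-pointing ray from 100 m altitude.
    dem = lambda x, y: 0.0
    K = np.diag([500.0, 500.0, 1.0])             # toy pinhole intrinsics
    ray = pixel_to_ray((0.0, 0.0), K, np.diag([1.0, 1.0, -1.0]))
    print(intersect_ray_with_dem(np.array([0.0, 0.0, 100.0]), ray, dem))

The point the sketch illustrates is that each feature measurement is anchored to the surface through the DEM and the known attitude, so the number of tracked features never changes the filter dimension.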
This approach has been shown to significantly limit the growth of position and velocity errors relative to a pure IMU-based propagation scheme, while greatly reducing computation and improving robustness compared with competing approaches from the literature. The paper will describe the motivation and general architecture of the navigation filter, present its mathematical derivation, and report simulated performance results.