ESA GNC Conference Papers Repository

Artificial Intelligence for Terrain Relative Navigation in Unknown Environments
Marcos Avilés, Maciej Quoos, Jesús Gil Fernández, Julia Wajoras
Presented at:
Sopot 2023

Space missions benefit greatly from the capability of the on-board GNC system to adapt rapidly to unknown environments. Autonomous vision-based navigation is a particular technology currently being implemented in several missions, and one of its most interesting applications is proximity operations around a small asteroid, such as those of HERA. The focus of this work was to develop a navigation algorithm capable of flying over unknown terrain while achieving better navigation performance than current vision-based techniques based on unknown-feature tracking. More specifically, a navigation architecture was designed using Image Processing (IP) techniques and integrated within a navigation filter that fuses the IP output with the rest of the GNC sensors. Limited a priori knowledge of the terrain was assumed (a coarse shape model of the target) in order to demonstrate the generalization of performance across different distances, surface characteristics and viewing orientations. Two different IP approaches were explored: the first using conventional IP techniques, and the second using more recent Artificial Intelligence (AI) techniques based on Convolutional Neural Networks (CNNs). The architecture was designed so that the core IP functions could be switched between approaches, providing higher flexibility and allowing the two to be compared at the level of individual functions, the entire IP chain and the navigation solution. The training data for the AI method consisted of pixel-wise correspondences between different images. One possible approach to obtain such information would have been to run a classical feature-extraction technique and use its correspondences. However, with such a method the potential of the neural network would not be exploited, as it would merely be taught to reproduce how classical methods work.
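A minimal sketch of how such a switchable IP core might look: the paper does not publish its interfaces, so the names below (`FeatureMatcher`, `pixels_to_bearings`) are hypothetical. The idea is that classical and CNN-based backends share one contract, and the navigation filter only ever sees correspondences converted, via the camera intrinsics, into line-of-sight measurements it can fuse with the other GNC sensors.

```python
# Hypothetical sketch of a backend-agnostic IP front-end; not the authors' code.
from abc import ABC, abstractmethod

import numpy as np


class FeatureMatcher(ABC):
    """Interchangeable IP core: a classical backend and a CNN backend would
    both implement this interface, so they can be swapped and compared at
    function, IP-chain and navigation level."""

    @abstractmethod
    def match(self, img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Return an (N, 4) array of pixel correspondences (xa, ya, xb, yb)."""


def pixels_to_bearings(uv: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Convert (N, 2) pixel coordinates into unit bearing vectors in the
    camera frame, the form in which feature observations are typically
    fused as line-of-sight measurements in a navigation filter."""
    uv1 = np.hstack([uv, np.ones((uv.shape[0], 1))])   # homogeneous pixels
    rays = uv1 @ np.linalg.inv(K).T                    # back-project through K
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)
```

With a pinhole intrinsics matrix K = [[f, 0, cx], [0, f, cy], [0, 0, 1]], the principal point (cx, cy) maps to the boresight direction (0, 0, 1), and off-axis pixels map to proportionally tilted unit rays.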
The solution was to provide the neural network with images, the camera poses corresponding to those images and, quite importantly, the camera intrinsics and the depth maps associated with the viewpoints. With this information, the system could warp points detected in one training image into another image overlooking the same structure. In this way the network could learn not only how the appearance of points changes with the relative position of the camera, but also how it changes with the illumination. Knowledge of the asteroid was assumed to be limited to its approximate geometric shape, with the nature and characteristics of its surface not completely known, adding uncertainty. The approach taken was to aim for a generic system, trained on a large population of randomly generated samples with the same characteristics as the estimated surface of the asteroid. More specifically, given that the most notable features to be expected on an asteroid are craters and boulders, two different models were used for the training: one with a more crater-dominated appearance and another with a more boulder-dominated appearance. The training also covered multiple illumination conditions, viewing directions and distances to the asteroid. The reference scenario was the Very Close Fly-By phase of the HERA mission (although none of the techniques studied were designed for this particular mission), and the performance of the proposed navigation was compared against the nominal HERA navigation, also exploring contingency cases in which no altimeter is present. The performance of the autonomous navigation system was validated in a high-fidelity Model-in-the-Loop (MIL) testbench considering the representative proximity operations of HERA during very low-altitude fly-bys, and also on a reduced-scale mock-up using real images acquired at a hardware (HW) facility.
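The warping step described above is standard multi-view geometry: back-project a pixel using its depth, move the resulting 3-D point between camera frames via the known poses, and re-project it with the intrinsics. The sketch below assumes a pinhole model, metric depth along the optical axis, camera-to-world 4x4 poses and shared intrinsics; it illustrates the geometry, not the authors' actual training pipeline.

```python
# Sketch of depth-based pixel warping between two calibrated views,
# as used to generate self-supervised pixel correspondences for training.
import numpy as np


def warp_pixel(uv, depth, K, T_wa, T_wb):
    """Map a pixel in view A to its location in view B using A's depth.

    uv:    (2,) pixel coordinates in image A
    depth: metric depth at uv along A's optical axis (from A's depth map)
    K:     (3, 3) pinhole intrinsics (assumed shared by both views)
    T_wa:  (4, 4) camera-to-world pose of view A
    T_wb:  (4, 4) camera-to-world pose of view B
    """
    # Back-project the pixel to a 3-D point in camera frame A.
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    p_a = ray * (depth / ray[2])          # scale so the z component equals depth
    # Transform: camera A -> world -> camera B.
    p_w = T_wa @ np.append(p_a, 1.0)
    p_b = np.linalg.inv(T_wb) @ p_w
    # Project into image B.
    q = K @ p_b[:3]
    return q[:2] / q[2]
```

Applied over whole images, this yields dense ground-truth correspondences between any two rendered views of the same (synthetic) asteroid surface, which is what lets the network learn appearance changes under viewpoint and illumination variation without any hand-labelled matches.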