ESA GNC Conference Papers Repository
Hardware acceleration and performance assessment of image processing algorithms applicable to future ESA missions
In the scope of future exploration missions, as well as missions in Earth's vicinity that require a particularly large degree of autonomy (such as rendezvous and docking, active debris removal, and in-orbit servicing missions), autonomous optical or Lidar-based navigation is rapidly becoming a key enabling technology. The present work demonstrates how to achieve real-time operation of relevant image processing algorithms through hardware acceleration, by performing an end-to-end analysis, review, optimization, implementation of software-only and FPGA-accelerated versions, and performance assessment of key image processing algorithms applicable to optical navigation, using realistic images of several likely scenarios of future ESA missions.

We first review a broad group of key image processing techniques with the purpose of assessing the feasibility of implementing them on a space-qualified chip containing CPU and co-processing FPGA components. An in-depth investigation is then carried out to characterize the key algorithms and find commonalities across them, with the purpose of identifying the functions which, if implemented in FPGA, would contribute most to the substantial acceleration of resource-hungry image processing algorithms likely to be involved in vision-based navigation - eventually making them amenable to real-time implementation on existing space-qualified hardware components. For each of the considered algorithms - object centroiding, limb+terminator detection, feature tracking, crater detection, terrain matching, lateral velocity estimation (for descent and landing), and hazard mapping (terrain illumination and roughness) - we then describe an optimized, hybrid CPU+FPGA solution which significantly reduces the computational burden on the CPU components, requires only moderate memory use, and provides suitable detection performance.
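To give a concrete sense of the simplest algorithm in the list above, the following is a minimal sketch of object centroiding as an intensity-weighted mean of pixel coordinates over a thresholded image. This is a generic textbook formulation, not the paper's actual implementation; the threshold value and the handling of empty frames are illustrative assumptions.

```python
import numpy as np

def centroid(image, threshold):
    """Intensity-weighted centroid (row, col) of pixels above a threshold.

    Generic sketch of object centroiding; the paper's actual algorithm,
    windowing, and calibration steps are not specified in the abstract.
    """
    img = np.asarray(image, dtype=float)
    weights = np.where(img > threshold, img, 0.0)  # suppress background
    total = weights.sum()
    if total == 0.0:
        return None  # no pixel exceeds the threshold
    rows, cols = np.indices(img.shape)
    return (float((rows * weights).sum() / total),
            float((cols * weights).sum() / total))

# Usage: a uniform 2x2 bright blob whose centre lies between pixels.
frame = np.zeros((6, 6))
frame[2:4, 2:4] = 100.0
print(centroid(frame, threshold=10.0))  # -> (2.5, 2.5)
```

Because the computation reduces to per-pixel multiply-accumulate operations, it maps naturally onto a streaming FPGA datapath, which is consistent with the hybrid CPU+FPGA partitioning the paper describes.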
The performance of the software-only versions of the algorithms is finally assessed using a realistic camera simulator that reproduces spacecraft, sky, terrain, and whole-planet images as they would be collected by a high-performance sensor, under representative state dispersions and in four different scenarios: planetary/small-body approach, small-body navigation, descent and landing, and rendezvous and docking missions. In addition, we demonstrate the performance of the hardware-accelerated versions of the algorithms implemented to date (object centroiding, feature tracking, and hazard mapping).