ESA GNC Conference Papers Repository
Lessons-learned from on-ground testing of image-based non-cooperative rendezvous navigation with visible-spectrum and thermal infrared cameras
ESA's Clean Space initiative aims at Active Debris Removal (ADR), i.e. de-orbiting defunct satellites or pieces of space junk. This is done in a rendezvous mission, but with a non-cooperative target. In order to perform the approach sequence, several constraints have to be taken into account, especially in low Earth orbit (LEO), where the priority debris for removal is located (mostly in sun-synchronous orbits). In the context of the Clean Space initiative, the activity 'Image Recognition and Processing for Navigation' (IRPN) at the Institute of Automation at TU Dresden (TUD) deals with the relative navigation between a chaser spacecraft and the non-cooperative target object using different complementary vision-based sensors (visible-spectrum and infrared (IR) cameras) and LIDAR. These sensors produce images (3-D point clouds in the case of the LIDAR) that must be analysed quickly on-board and in real time using image processing techniques (feature detection, target matching) and pose estimation algorithms (relative kinematic state estimation). The conditions presented by the target are very complex and challenging for these functions: it has highly reflective surface materials and, due to its geometry, there are ambiguities in the determination of the target's pose and motion state, particularly at close range (some repeated geometric features on the surface would be indistinguishable if they were the only tracked elements). In contrast to vision-based navigation for landing missions, the image processing requirements in an ADR scenario are drastically different, with very quick changes in illumination conditions (due to the target's rotation and its revolution around Earth), and complex target shapes with highly reflective materials and textures have to be tracked.
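The relative kinematic state estimation mentioned above can be illustrated with a minimal sketch. Assuming matched 3-D points are available (e.g. LIDAR returns associated with points of a known target model), the rigid transform between chaser and target can be recovered with the classical Kabsch/Procrustes alignment; this is an illustrative example, not the specific algorithm used in the IRPN activity, and all numeric values below are assumptions.

```python
import numpy as np

def estimate_relative_pose(model_pts, observed_pts):
    """Kabsch/Procrustes rigid alignment: find R, t with observed ~ R @ model + t.

    model_pts, observed_pts: (N, 3) arrays of matched 3-D points,
    e.g. points on a target CAD model vs. the corresponding LIDAR returns.
    """
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (observed_pts - co)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t

# Hypothetical demo: a target rotated 30 deg about its z-axis, 10 m away.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.2, 10.0])               # chaser-to-target offset [m]
model = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
observed = model @ R_true.T + t_true               # noise-free measurements
R_est, t_est = estimate_relative_pose(model, observed)
```

The reflection guard is the step that matters in practice: with near-planar or near-symmetric feature constellations (as on the repeated surface geometry described above), the unconstrained SVD solution can flip into an improper rotation, which is one face of the pose ambiguity problem.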
All these conditions result in very demanding requirements for navigation in the ADR scenario, in particular for the image processing and the sensor suite, and pose difficulties for the Verification and Validation scenarios that are defined to test these algorithms. For testing the image processing functions, synthetic (rendered) image data is a popular choice because it can nowadays be generated with a high level of detail in a short time and always provides perfect ground truth, which makes it possible to execute a large number of simulations. However, synthetic image data (especially of spacecraft in an in-orbit environment) still lacks realistic effects for reflections, glares and imperfections and, in the case of IR images, temporal and spatial thermal variations. Real camera image data (collected in orbit) is still unequalled regarding these effects; however, real space image data is rare and is usually missing accurate ground truth. It is possible to simulate the approach of a chaser spacecraft towards a target object in an on-ground laboratory environment using off-the-shelf cameras and down-scaled mock-ups, but it is not straightforward to realistically simulate the space environment and spacecraft materials with little effort. Real image data was generated at TUD within the IRPN activity: with TUD's Spacecraft Rendezvous Simulator MiPOS, image data from visible-spectrum and thermal-infrared cameras was acquired using different mock-ups. To obtain image data with realistic behaviour, it was necessary to cope with various challenges regarding target mock-up heating and illumination, background visibility, etc. This paper describes the difficulties usually faced in this kind of on-ground testing (e.g.
the limited spatial range of the facilities, or the realistic and detailed simulation of the environment, which makes a big difference for the image processing algorithms) and explains the solutions and analyses that were provided to cope with these problems. Several mock-ups of different sizes and characteristics were built and used in order to match them to the consecutive phases of the chaser's approach to the target spacecraft. Hold points were defined for accurately connecting consecutive parts of the trajectories and for refocusing the optics, which was necessary because of the scale of the models. Camera calibration (for the visible-spectrum and IR cameras) was performed, together with an end-to-end assessment to check the accuracy of the overall set-up. It was also necessary to keep the background at a temperature different from that of the model, in order to reproduce for the IR cameras the contrast between a spacecraft in orbit and deep space. The paper includes all the lessons learnt and recommendations for further testing.
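The end-to-end accuracy check described above is typically quantified as a reprojection error: project the known mock-up geometry through the calibrated camera model and compare against the detected image features. The sketch below assumes a simple pinhole model with hypothetical intrinsics (the IRPN set-up and calibration procedure are not reproduced here; focal length and detector size are assumptions for illustration).

```python
import numpy as np

def project(K, R, t, pts3d):
    """Pinhole projection of 3-D world points into pixel coordinates."""
    cam = pts3d @ R.T + t                 # world frame -> camera frame
    uv = cam @ K.T                        # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]         # perspective divide

def reprojection_rmse(K, R, t, pts3d, pts2d):
    """RMS pixel error between projected model points and detected features."""
    err = project(K, R, t, pts3d) - pts2d
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

# Hypothetical intrinsics for a 1024x1024 detector (assumed values).
K = np.array([[800.0,   0.0, 512.0],
              [  0.0, 800.0, 512.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                             # camera aligned with world frame
t = np.array([0.0, 0.0, 5.0])             # mock-up 5 m in front of the camera
pts3d = np.array([[0.1, 0.0, 0.0], [0.0, 0.2, 0.0],
                  [-0.1, -0.1, 0.1], [0.2, 0.1, -0.1]])
pts2d = project(K, R, t, pts3d)           # perfect, noise-free "detections"
rmse = reprojection_rmse(K, R, t, pts3d, pts2d)   # 0.0 for perfect data
```

With real detections, a non-zero RMSE bounds the combined error of calibration, mock-up positioning and feature detection, which is why such an end-to-end figure is a useful acceptance criterion for the overall laboratory set-up.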