ESA GNC Conference Papers Repository

Dual technologies to mature vision based navigation: the example of space rendezvous and air to air refuelling
K.K. Kanani, C. Robin, A. Masson, R. Brochard, P. Duteis, R. Delage
Presented at:
Salzburg 2017

An ever increasing number of terrestrial and space applications require autonomy for the involved vehicles or robots. This is partly achieved through embedded vision-based navigation (VBN) modules. Indeed, the requirements for terrestrial and space navigation share many characteristics, such as the trajectory to follow, the target to track, or the terrain to land on. As a consequence, VBN solutions developed for space can, most of the time, also be used on Earth, and vice versa. This greatly helps the maturation of VBN technologies, mainly because tests in real conditions are easier to carry out on Earth than in space. In this paper, we focus on one of these dual VBN technologies: the detection and tracking of a known but non- or semi-cooperative target. This technology is currently being developed by Airbus Defence and Space to carry out space rendezvous and automatic air to air refuelling. Space rendezvous consists of approaching a space object (satellite, launcher stage, space module…) in order to dock onto or capture it. Air to air refuelling consists of docking the boom of a tanker aircraft with the receptacle of a receiver aircraft to be refuelled. The tanker boom is currently controlled by a human operator on board the tanker aircraft. Our goal is to make the refuelling autonomous using vision sensors on board the tanker. The space object to dock with, as well as the receiver to refuel, are considered to be a priori known, at least partially, and non- or semi-cooperative (e.g. their pose may be constrained to an a priori envelope, but no pose measurements are provided in real time to the chaser). In this paper, we will describe in more detail the scenarios and challenges of autonomous space rendezvous and air to air refuelling, and point out the commonalities between the two applications. We will then present our model-based tracking, which relies upon LiDAR or camera images acquired by the 'chaser', as well as the 3D model of the target.
Our tracking solution has been tested on real data, acquired during the LIRIS experiment mounted on ATV-5 during its rendezvous with the ISS, and on an F-16 aircraft refuelled by an A310 MRTT. The performance of the tracking, in terms of accuracy, robustness and computing time, will be presented for both applications. Finally, a technological roadmap covering both the processing and the sensors, as well as the remaining challenges and the future projects in which Airbus takes part, will be presented.
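The abstract does not detail the tracking algorithm itself. For readers unfamiliar with model-based tracking, its core step can be sketched as pose refinement: given a known 3D model of the target and its observed 2D image features, iteratively adjust the camera-to-target pose so that the projected model aligns with the observations. The minimal numpy sketch below is purely illustrative and is not the Airbus implementation; the camera intrinsics, model points, and poses are made-up values, and a simple Gauss-Newton loop with a numerical Jacobian stands in for whatever solver the authors use.

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(model_pts, pose, K_cam):
    """Project Nx3 model points into the image; pose = [axis-angle | translation]."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = model_pts @ R.T + t          # model points in the camera frame
    uv = cam @ K_cam.T                 # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]      # perspective division

def refine_pose(pose, model_pts, observed_uv, K_cam, iters=15):
    """Gauss-Newton refinement minimising the 2D reprojection error."""
    pose = pose.astype(float).copy()
    for _ in range(iters):
        r = (project(model_pts, pose, K_cam) - observed_uv).ravel()
        J = np.empty((r.size, 6))      # numerical Jacobian w.r.t. the 6 pose params
        for i in range(6):
            d = np.zeros(6); d[i] = 1e-6
            J[:, i] = ((project(model_pts, pose + d, K_cam)
                        - observed_uv).ravel() - r) / 1e-6
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose += step
        if np.linalg.norm(step) < 1e-10:
            break
    return pose

# Synthetic example: a cube-shaped "target" seen by a pinhole camera.
K_cam = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
model = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
true_pose = np.array([0.10, -0.05, 0.02, 0.1, -0.2, 5.0])
obs = project(model, true_pose, K_cam)                    # simulated detections
init = true_pose + np.array([0.05, -0.04, 0.03, 0.2, -0.1, 0.3])
est = refine_pose(init, model, obs, K_cam)                # recovers true_pose
```

In a real chaser, `obs` would come from edge or feature matches extracted from the LiDAR or camera image, the refined pose would seed the next frame (tracking), and a robust loss would down-weight outlier matches.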