ESA GNC Conference Papers Repository

Title:
Visual Navigation over Flat Terrain using Virtual Optical Flow
Authors:
Chernykh, V.; Beck, M.; Janschek, K.
Presented at:
Tralee 2008
DOI:
Full paper:
Abstract:

Future planetary landing missions will require pin-point landing capability at a pre-selected location to fulfil the scientific goals and to improve safety. Currently, planetary lander navigation is generally performed by determination of the orbit parameters before initiation of the descent and by pure inertial navigation (integration of the acceleration and angular velocity data from the IMU) after the de-orbit burn. Even if the orbit parameters have been determined with relatively high accuracy, error accumulation during the inertial navigation phase makes pin-point landing impossible (in practice, the dimensions of the landing error ellipse can reach tens or even hundreds of kilometres). Without a GPS-like system or radio beacons, the most promising option for improving the landing accuracy is visual navigation with respect to the planetary surface by processing images from an onboard navigation camera.

A number of visual navigation solutions have already been proposed and investigated. Feature tracking approaches [1, 2] are based on detection and tracking of unknown surface features in the field of view of the navigation camera. The DIMES system was successfully used for estimation of the spacecraft's horizontal velocity with respect to the ground (also using altimeter and IMU data) during the 2003 MER mission. Feature tracking approaches, however, cannot provide the absolute position of the spacecraft, which is required for precise landing at a pre-selected location. Landmark matching approaches [3] are based on the detection (matching) of known landmarks in the navigation camera images. The absolute position and attitude (pose) of the spacecraft can be determined if the landmark positions within the image and their coordinates in the planet-fixed frame are known. Landmark matching approaches can provide the absolute position data required for pin-point landing; they require, however, constant visibility of at least three landmarks suitable for reliable and accurate matching. Approaches based on 3D terrain matching [4] also provide absolute pose data, but they are only applicable to terrain with significant (well pronounced) 3D relief.

In this paper we present a concept for visual navigation over flat terrain, based on the analysis of the image motion pattern (optical flow) determined for pairs of real and virtual navigation camera images using an onboard optical correlator. The virtual camera image is generated from a reference image of the landing site using the initial (approximate) estimate of the spacecraft pose. The proposed approach can be used for the practical realization of a mono-camera visual navigation system for a planetary landing vehicle or a flying robot (UAV, unmanned aerial vehicle) operating over smooth terrain in the frame of planetary exploration. The inherent high redundancy of the virtual optical flow determination ensures high accuracy and stability of the camera pose determination even under poor observation conditions. No special texture features (landmarks) are required; the reference images can be provided by a previous mapping mission (satellite observation). The proposed approach offers high robustness to perspective distortions and is therefore particularly suitable for operation with wide-angle and off-nadir-looking navigation cameras. The paper introduces the underlying methodological principles and discusses computational realization aspects.
Typical navigation performance figures, based on simulation experiments with high-fidelity software models of the optical correlator hardware and of the image/navigation processing algorithms, are presented for a planetary landing scenario. The performance analysis comprises navigation error estimation for simulated Moon and Mars landing missions, as well as an analysis of the effect of surface relief roughness on the system performance.
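
To make the core idea concrete, the sketch below illustrates, under simplifying assumptions, one way the virtual-image/optical-flow loop can be set up: the flat terrain is modelled as the plane Z = 0, the reference map is warped into a virtual camera view at the estimated pose via a homography, and the image motion between the real and the virtual image is fitted with a correcting homography from which the pose error can be recovered. This is not the authors' implementation: the function names are illustrative, OpenCV's Farneback dense flow stands in for the onboard optical correlator described in the paper, and the reference map's pixel coordinates are assumed to coincide with the terrain-plane coordinates.

import numpy as np
import cv2

def plane_to_image_homography(K, R, t):
    # Homography mapping points (X, Y) on the flat terrain plane Z = 0
    # into pixel coordinates of a camera with intrinsics K and pose (R, t),
    # where a world point maps to the camera frame as x_cam = R @ X_world + t.
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def render_virtual_image(ref_map, K, R_est, t_est):
    # Warp the reference map of the landing site into the view of a
    # "virtual camera" placed at the (approximate) estimated pose.
    # Assumption: ref_map pixel coordinates coincide with terrain-plane coordinates.
    h, w = ref_map.shape[:2]
    H = plane_to_image_homography(K, R_est, t_est)
    return cv2.warpPerspective(ref_map, H, (w, h))

def correction_from_virtual_flow(real_img, virtual_img, grid_step=32):
    # Measure the image motion pattern between the virtual and the real image
    # (Farneback dense flow as a stand-in for the onboard optical correlator)
    # and fit a correcting homography over a coarse, highly redundant grid.
    flow = cv2.calcOpticalFlowFarneback(virtual_img, real_img, None,
                                        0.5, 3, 25, 3, 5, 1.2, 0)
    h, w = virtual_img.shape[:2]
    ys, xs = np.mgrid[grid_step // 2:h:grid_step, grid_step // 2:w:grid_step]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[ys.ravel(), xs.ravel()]
    H_corr, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H_corr  # maps virtual-image pixels to real-image pixels

If the estimated pose is exact, H_corr is close to the identity; otherwise its decomposition (for example with cv2.decomposeHomographyMat and the camera matrix K) yields a rotation and translation update that can be fed back into the pose estimate, which reflects the redundancy of fitting the correction from many flow measurements rather than from a few landmarks.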