ESA GNC Conference Papers Repository

Title:
On the use of plenoptic imaging technology for 3D vision based relative navigation in space
Authors:
A. Hernández Delgado, N. Martínez Rey, J.P. Lüke, J.M. Rodríguez Ramos, M. Sánchez Gestido, M. Bullock
Presented at:
Salzburg 2017
DOI:
Full paper:
Abstract:

In recent years ESA has started to develop cameras and image processing algorithms for relative navigation. Recent ESA studies target the use of passive vision-based and infrared devices, as well as new algorithms that use the line-of-sight angle to detect target motion across the field of view of a sensor. Given the latest advances in computational optics, the time has come within ESA activities to investigate the use of plenoptic cameras for space applications in general.

A light field camera (also known as a plenoptic camera) captures both the intensity of the light in a scene and the direction in which the light rays travel. More specifically, the plenoptic function is parameterized by (X, Y, Z, θ, φ, λ, t), where (X, Y, Z) is a point in space, (θ, φ) are the angles of the rays that pass through that point, and λ and t are the wavelength and the time, respectively. A first prototype of the plenoptic camera was developed by Adelson and Wang in 1992 and later improved by Ren Ng. What these plenoptic cameras capture, however, is not the full plenoptic function, since, as stated by Adelson and Bergen, this is impossible. Instead, the camera captures a light field, a reduced version of the plenoptic function in which the light rays in free space are parameterized by their intersection points with two parallel planes, giving a 4D function (wavelength and time are omitted).

There are several methods for capturing light fields: camera arrays, spatial multiplexing (plenoptic cameras and others), temporal multiplexing, and frequency multiplexing. The first two are the most suitable for navigation, because they capture the light field in a single shot and the computational effort needed to decode the raw data into a light field is moderate. Temporal multiplexing is only valid for static scenes, and frequency multiplexing requires more computation for decoding. Once a light field has been captured and decoded from the raw data, it can be used in several ways: light field rendering, which generates images, for example at different focus distances or viewpoints; depth-from-light-field techniques, which obtain depth maps from the depth information encoded in the structure of the light field; and, finally, wavefront sensing. Some examples of the use of light fields for navigation also exist in the literature. Probably the main advantage of a plenoptic camera is that it provides multi-view imagery of the scene with a single-lens camera, since a plenoptic camera is a conventional camera with a micro-lens array placed between the main lens and the sensor.

We will discuss the main steps in using these capture methods for relative navigation. First, a general architecture for the software modules needed to process a light field and extract the state vectors required for navigation will be presented. After that, specific implementation options will be reviewed and, finally, potential limitations of the method will be evaluated.
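As an illustration of the decoding and rendering steps mentioned above, the following minimal sketch (Python/NumPy, not taken from the paper) rearranges a raw plenoptic image into a 4D two-plane light field, extracts sub-aperture (multi-view) images, and produces a shift-and-sum refocused image. It assumes an idealized camera in which every micro-lens covers exactly N x N pixels and the micro-lens grid is aligned with the pixel grid; a real decoder must also handle grid rotation, hexagonal layouts, vignetting and calibration.

```python
import numpy as np

def decode_light_field(raw: np.ndarray, n: int) -> np.ndarray:
    """Rearrange a raw plenoptic image into a 4D light field L[u, v, s, t].

    raw : (H, W) sensor image, with H and W multiples of n.
    n   : number of pixels behind each micro-lens per direction (angular resolution).
    Returns an array of shape (n, n, H//n, W//n); for fixed (u, v) the slice
    L[u, v] is a sub-aperture image, i.e. the scene seen from one viewpoint
    of the main-lens aperture.
    """
    h, w = raw.shape
    s, t = h // n, w // n
    # Pixel (u, v) under micro-lens (s, t) samples direction (u, v) at spatial
    # position (s, t): the two-plane parameterization of the light field.
    return raw.reshape(s, n, t, n).transpose(1, 3, 0, 2)

def refocus(lf: np.ndarray, shift: float) -> np.ndarray:
    """Shift-and-sum refocusing: average the sub-aperture images after shifting
    each one proportionally to its viewpoint offset from the central view."""
    n = lf.shape[0]
    c = (n - 1) / 2.0
    acc = np.zeros(lf.shape[2:], dtype=float)
    for u in range(n):
        for v in range(n):
            dy = int(round((u - c) * shift))
            dx = int(round((v - c) * shift))
            acc += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return acc / n**2

# Example with synthetic data: 30 x 30 micro-lenses, 15 x 15 pixels each.
raw = np.random.rand(450, 450)
lf = decode_light_field(raw, n=15)   # shape (15, 15, 30, 30)
center_view = lf[7, 7]               # central sub-aperture image
refocused = refocus(lf, shift=0.5)   # synthetic refocus at one depth
```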
The issues discussed in this paper, which must be tackled when implementing a navigation system with a plenoptic camera, are:

- Spatial-angular resolution trade-off: This trade-off is inherent to plenoptic cameras and does not arise with camera arrays. In a plenoptic camera the images taken from slightly different viewpoints share the same sensor, so the larger each image is, the fewer images can be captured, and vice versa. This must be taken into account when designing a specific plenoptic camera system.
- Field of view (FOV) and 3D capability trade-off: In a plenoptic camera, achieving refocusing capability and 3D measurements requires increasing the focal length, which reduces the field of view. On the other hand, tracking algorithms need a sufficiently large field of view to work properly. This trade-off is discussed in more detail in the paper.
- Depth estimation accuracy with respect to camera system design parameters: Image processing algorithms estimate disparities with a certain accuracy (about 0.1 pixels). The disparity values are then mapped to depth values through a function that also depends on the design parameters of the camera system. These parameters can be fine-tuned to reduce depth estimation errors, but this influences other parts of the system, such as the field of view (a simplified first-order sketch of this mapping is given after this list).
- Propagation of matching and depth estimation errors to state vectors: Depth estimation yields a 3D point cloud or a depth map, which is then used to compute the state vectors for navigation. Noise in the 3D data affects the state vectors, and this noise will be related to the resulting state-vector errors.
- Bandwidth and computational power limitations: Light fields contain a considerably larger amount of data, which has to be processed fast enough to deliver timely results, so a trade-off between the amount of data captured and the computational effort needed to produce the state vectors is considered.
- Influence of changes in illumination: This issue is especially important for navigation in space, where illumination changes dramatically between sunlight, umbra and penumbra. In addition, related effects such as reflections from multi-layer insulation (MLI) in rendezvous operations must be considered.

These issues do not arise separately: a change in one of the system design parameters can modify the behavior of several of them at once. We will show how they relate to and influence one another in order to find a trade-off and assess this promising technology for two test scenarios: uncooperative rendezvous for Active Debris Removal (ADR) and Moon landing.
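As a hedged illustration of the depth-accuracy and error-propagation issues above, the short sketch below uses a simplified stereo-like model of the disparity-to-depth mapping, Z = f·b/d, its first-order error propagation, σ_Z ≈ Z²·σ_d/(f·b), and the pinhole relation FOV = 2·atan(w/2f). The exact mapping for a plenoptic camera depends on the micro-lens geometry and is part of the design trade-offs treated in the paper; all numerical values here are arbitrary assumptions, not parameters from the paper.

```python
import math

def depth_from_disparity(d_px: float, f_mm: float, baseline_mm: float,
                         pixel_pitch_um: float) -> float:
    """Z = f * b / d, with the disparity given in pixels (result in mm)."""
    d_mm = d_px * pixel_pitch_um * 1e-3
    return f_mm * baseline_mm / d_mm

def depth_error(z_mm: float, sigma_d_px: float, f_mm: float,
                baseline_mm: float, pixel_pitch_um: float) -> float:
    """First-order propagation: sigma_Z = Z^2 * sigma_d / (f * b)."""
    sigma_d_mm = sigma_d_px * pixel_pitch_um * 1e-3
    return z_mm ** 2 * sigma_d_mm / (f_mm * baseline_mm)

def horizontal_fov_deg(sensor_width_mm: float, f_mm: float) -> float:
    """FOV = 2 * atan(w / (2 f)): a longer focal length narrows the FOV."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * f_mm)))

# Arbitrary example values: f = 50 mm, effective baseline b = 10 mm,
# pixel pitch 5 um, sensor width 10 mm, target at 10 m.
f, b, pitch, w = 50.0, 10.0, 5.0, 10.0
z = 10_000.0                                  # target range in mm
print(depth_error(z, 0.1, f, b, pitch))       # ~100 mm depth uncertainty
print(horizontal_fov_deg(w, f))               # ~11.4 deg field of view
print(horizontal_fov_deg(w, f / 2))           # ~22.6 deg with half the focal length
```

In this simplified model, halving the focal length roughly doubles the field of view but also doubles the depth uncertainty at a given range, illustrating how a single design parameter couples several of the issues listed above.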