Abstract

This paper addresses the problem of reconstructing a monochromatic light field from data points irregularly distributed within a volume of interest. Such a setting is relevant for a wide range of three-dimensional display and beam shaping applications, which deal with physically inconsistent data. Two finite-dimensional models of monochromatic light fields are used to state the reconstruction problem as regularized matrix inversion. The Tikhonov method, implemented by the iterative algorithm of conjugate gradients, is used for regularization. Estimates of the model dimensionality are related to the number of degrees of freedom of the light field so as to show how to control the data redundancy. Experiments demonstrate that various data point distributions lead to ill-posedness and that regularized inversion is able to compensate for the data point inconsistencies with good numerical performance.

1. Introduction

Many optical applications require the generation and control of light fields. Digital processing by computers is an attractive way of implementing such operations, as it overcomes possible physical limitations of analog devices. However, signal processing has to be suitably coupled with the otherwise naturally continuous optical signals. This coupling requires an effective discrete representation of the continuous functions associated with the light fields. From one point of view, the discrete representation of a light field should preserve the degrees of freedom of the continuous model which describes the physical properties of the field. From another point of view, the discrete representation should accommodate the requirements imposed on the light field by the application. Three-dimensional (3D) imaging deals with the reconstruction of captured optical signals by digital means such as CCD arrays, and with the recreation of synthetic, or computer generated, 3D data by holographic means [1]. Light beam shaping requires the reconstruction or synthesis of light beams which maintain certain properties along their propagation [2].

This paper addresses the problem of monochromatic light field reconstruction from an ensemble of data samples with free positions, distributed within a volumetric region of interest. Specifying the light field on a nonuniform, irregular grid of points is rather general and can be utilized in a wide range of applications. For example, computer generated holography [3] reconstructs a light field to approximate a synthetic or a captured object, given its 3D abstract model—a point cloud, mesh, NURBS, and so forth [4]. The data provided by such 3D models can be hosted conveniently by an irregular grid. Limited aperture and resolution in digital holography are remedied by multiple CCD recordings in a single plane or in different planes [5, 6]. These are prone to distortions originating from imprecisely known CCD measurement locations, which can be handled by an irregular grid of points. Reconstruction of a light field in terms of light field generators is required to properly drive a light modulation system for an eventual light field synthesis. For example, a color holographic 3D display system drives three spatial light modulators—one for each of the red, green, and blue color channels [7].

Another advantage of specifying the application constraints as an ensemble of irregularly distributed samples is the control over the input data redundancy. According to the Rayleigh-Sommerfeld diffraction integral, the light field on a certain plane is sufficient to compute the wave field at any point within the volume of interest [8]. Therefore, a regular grid of samples at a certain transversal plane should be sufficient to reconstruct the whole field. Gori [9] and Onural [10] have derived conditions on the sampling rate when the support of the field at some Fresnel plane is limited, considering the cases when the diffraction field originates from a strictly band-limited or a strictly space-limited function. In the general case of a function which is essentially limited both in the space and the frequency domain, the number of needed samples should not be higher than the number of degrees of freedom of the field, measured by the finite space-bandwidth product. In this case the conditions from [9, 10] require a redundant number of samples, as the spatial spread of the field extends upon propagation along the longitudinal direction, while the frequency support remains the same. An ad hoc approach as in [11] and a strict derivation as in [12, 13] use information about the local bandwidth so that the number of needed samples at any transversal plane matches the number of degrees of freedom. Early works [3, 14] use some of these results to specify the object on a plane, losing the volumetric opportunity. Some recent works commonly specify the volume of interest on an ensemble of equally spaced planes, at a regular grid of points on each plane, often oversampled [2, 15–19]. However, such a 3D grid is still redundant with respect to the number of degrees of freedom of the diffraction field. Moreover, a fixed 3D grid might not fit well an application-driven data point distribution.
In this sense, an irregular grid of points not only fits any potential application but also opens up opportunities for volumetric specifications and data redundancy control.

In any application, the specifications for the light field are not guaranteed to be physically consistent with a light wave. Digital recording of holograms suffers from CCD noise, misalignment, and finite aperture and resolution. Light beam shaping and computer generated holography specify complex structures with properties unlikely to be exhibited by a light wave. The mainstream of work on light field synthesis treats the data specification inconsistencies by various iterative projection methods derived from the Gerchberg-Saxton algorithm [20]. Volumetric specifications usually define one constraint set per plane, assign weights and tolerances to the sets, and use parallel or serial projections to reach an optimized solution [2, 6, 18, 19, 21, 22]. Other methods define the constraint sets on Ewald's surface [17] or in the fractional Fourier domain [23]. Alternative approaches to treating the data inconsistency are based on genetic algorithms, with application to beam shaping [24, 25]. In our work we employ two finite-dimensional models for diffraction which represent a light field by a linear combination of generating functions [26, 27]. In these models, the inconsistencies of the specified data points can be represented by an additive Gaussian term. While being simple, such a representation is quite general and encompasses a wide range of distortions.

The light field reconstruction problem can be cast as an inverse problem [28] to find the coefficients of the finite-dimensional models from the specified data points [26, 27, 29]. The focus of this paper is on the characteristics of the input data specification which influence the stability of the finite-dimensional models for light field reconstruction. The spectrum of each model is analyzed in various scenarios depending on the size of the volume of interest, the clustering of the data points, and the amount of input data, as these are important for practical applications. As far as the mathematical approach is concerned, Tikhonov regularization [28] is used with both models to provide an approximate solution which fits the specified data points. The singular value decomposition- (SVD-) based implementation of Tikhonov regularization is used to illustrate and analyze the limits of the reconstruction. The regularization parameters are determined by Morozov's discrepancy principle. The matrix-free Tikhonov regularization is suggested as a practical approach to reconstruction. It iteratively finds a regularized solution based on the method of conjugate gradients (CG) [30]. The convergence rate and final reconstruction error for various scenarios are presented to assess this approach.

The paper is organized as follows. Section 2 introduces the basics of diffraction and notations related to the considered light fields. Section 3 presents the finite-dimensional models used to represent the field, and discusses their properties with focus on dimensionality issues. Section 4 states the observation model in a matrix form, based on the irregularly distributed inconsistent data samples. Regularized solutions used for approximating the input data are presented in Section 5. Section 6 illustrates the performance of the suggested reconstruction methods in a number of guided experiments.

2. Basics of Diffraction

Consider a light field u(x, y, z), generated by a monochromatic light wave, which propagates in a linear, isotropic, and homogeneous medium. Under such conditions, the spatial distribution of the complex amplitude satisfies the homogeneous Helmholtz wave equation [8]
∇²u + k²u = 0, (1)
where k = 2π/λ is the wave number of the monochromatic light with wavelength λ. Such waves, emerging from an optical system, satisfy the Sommerfeld radiation condition and are accurately described by the Rayleigh-Sommerfeld diffraction integral [8]. It relates the light field at any point to the field on a "reference" plane at z = 0 in a linear and shift-invariant relationship. However, the practical use of this relationship for computations is limited, as it requires a very high sampling rate [31]. Therefore, frequency-domain alternatives of the Rayleigh-Sommerfeld diffraction integral are preferred [2, 22, 31].

For the sake of simplicity, the discussion is restricted to one transverse dimension (x) only, resulting in a two-dimensional (2D) scalar function u(x, z) describing the light field. Note that the interesting dimension for the scope of this paper is the longitudinal one (z), as it carries the volumetric properties of the light field. Generalizations to the three-dimensional case are straightforward and mentioned whenever necessary. The function u(x, z) can be represented by its 2D Fourier transform U(ω_x, ω_z):
u(x, z) = (1/4π²) ∬ U(ω_x, ω_z) exp(j(ω_x x + ω_z z)) dω_x dω_z, (2)
where ω_x and ω_z are the spatial frequencies along the x and z directions, respectively. Applying the Fourier transform to (1) and using the derivative property of the Fourier transform, one ends up with the relation (k² − ω_x² − ω_z²) U(ω_x, ω_z) = 0. Hence, U(ω_x, ω_z) can be nonzero only on the circle with radius k:
ω_x² + ω_z² = k². (3)
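The circle constraint can be checked numerically. The sketch below verifies by finite differences that a plane wave whose spatial frequencies lie on the circle satisfies the Helmholtz equation; the wavelength, propagation direction, and evaluation point are illustrative assumptions, not values from the paper.

```python
import numpy as np

# A plane wave with frequencies (wx, wz) on the circle wx^2 + wz^2 = k^2
# should satisfy the homogeneous Helmholtz equation lap(u) + k^2 u = 0.
wavelength = 633e-9                  # assumed He-Ne wavelength, for illustration
k = 2 * np.pi / wavelength           # wave number
phi = 0.3                            # arbitrary direction on the circle
wx, wz = k * np.cos(phi), k * np.sin(phi)

def u(x, z):
    return np.exp(1j * (wx * x + wz * z))

h = wavelength / 50                  # finite-difference step
x0, z0 = 1e-6, 2e-6                  # arbitrary evaluation point

# 5-point Laplacian approximation of d2u/dx2 + d2u/dz2
lap = (u(x0 + h, z0) + u(x0 - h, z0) + u(x0, z0 + h) + u(x0, z0 - h)
       - 4 * u(x0, z0)) / h**2

residual = lap + k**2 * u(x0, z0)    # should vanish up to discretization error
rel_residual = abs(residual) / (k**2 * abs(u(x0, z0)))
print(rel_residual)                  # small, limited by the O(h^2) FD error
```

The residual is nonzero only because of the second-order finite-difference error, which shrinks quadratically with the step h.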

3. Finite-Dimensional Models of Diffraction

3.1. Bessel-Fourier (B-F) Generators

As the spatial frequencies ω_x and ω_z are limited to the circle, the Fourier transform is in fact a one-dimensional function of the polar angle:
Ũ(φ) = U(k cos φ, k sin φ), (4)
where the frequencies are expressed in polar form as ω_x = k cos φ and ω_z = k sin φ, with φ ∈ [0, 2π). Upon a change of the spatial coordinates x and z to polar form as x = r cos θ and z = r sin θ, the Fourier transform in (2) becomes
u(r, θ) = (1/2π) ∫₀^{2π} Ũ(φ) exp(jkr cos(φ − θ)) dφ, (5)
where constant factors have been absorbed into Ũ(φ). The function Ũ(φ) is 2π-periodic. Consequently, it can be described by the complex Fourier series Ũ(φ) = Σ_n c_n exp(jnφ). Inserting this into (5) and changing the order of summation and integration, the field can be expressed as
u(r, θ) = Σ_{n=−N}^{N} c_n j^n J_n(kr) exp(jnθ), (6)
where J_n is the nth-order Bessel function of the first kind [32], which solves the remaining integral under the summation. Equation (6) assumes that the Fourier series representation of Ũ(φ) has been truncated to 2N + 1 nonzero coefficients to arrive at a finite representation. A detailed derivation of the model can be found in [26]. This model describes the field continuously at any spatial point as a superposition of a discrete set of basis functions J_n(kr) exp(jnθ), which are mutually orthogonal and separable. Hence, the discrete set of scaling coefficients {c_n} describes the continuous field completely. Yet, no explicit discretization was done during the derivation.
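The model of (6) is straightforward to evaluate numerically. The sketch below (the wavelength, truncation order, and random coefficients are illustrative assumptions) computes the field at arbitrary polar points:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

# Sketch of the B-F model of (6): the field at a polar point (r, theta) is a
# finite superposition of the generators J_n(k r) exp(j n theta).
wavelength = 633e-9
k = 2 * np.pi / wavelength            # wave number

N = 8                                 # truncation order of the Fourier series
n = np.arange(-N, N + 1)              # 2N + 1 generators
rng = np.random.default_rng(0)
c = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

def field_bf(r, theta):
    """Evaluate the B-F model continuously at polar point(s) (r, theta)."""
    r = np.atleast_1d(r)[:, None]
    theta = np.atleast_1d(theta)[:, None]
    basis = (1j) ** n * jv(n, k * r) * np.exp(1j * n * theta)
    return basis @ c

u = field_bf([1e-6, 2e-6], [0.1, 0.4])
print(u.shape)  # one complex field value per requested point
```

No grid is involved: the same coefficient vector yields the field at any continuously chosen point, exactly as the model promises.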

The model of (6) can be generalized to three dimensions [33]. In 3D, the Fourier transform is supported on a sphere with the same radius k and is therefore two-dimensional. Such a function can be expressed as a superposition of spherical harmonics instead of the cylindrical harmonics exp(jnθ). In the radial direction, the Bessel functions are substituted by the spherical Bessel functions of the first kind.

3.2. Fourier Generators

Another frequency-domain representation of the diffraction field is based on a decomposition into plane waves exp(j(ω_x x + ω_z z)), with ω_z = √(k² − ω_x²):
u(x, z) = (1/2π) ∫_{−k}^{k} A(ω_x) exp(j(ω_x x + ω_z z)) dω_x, (7)
where A(ω_x) is the 1D Fourier transform of the field restricted to the initial line z = 0. The derivation of this integral is based on solving the Helmholtz equation for the 1D Fourier transform of the field restricted to a line at a distance z, with A(ω_x) given as the initial condition. The plane waves propagate only in the positive z direction, as the frequencies ω_z assume only a positive sign. This corresponds to taking only the half of the circle ω_x² + ω_z² = k² with ω_z ≥ 0.

As ω_x is limited within [−k, k], the function A(ω_x) is band limited. It can be further assumed to be essentially space limited or, more precisely, to have a finite space-bandwidth product. Such a function can be periodized with a period X equal to the transversal extent of the spatial region of interest. This is equivalent to a discretization of ω_x with step 2π/X. A periodic and band-limited function is equivalently represented by a finite number of discrete spectral components at the frequency values ω_{x,m} = 2πm/X. Using these values of ω_x in (7) leads to a finite-dimensional model of diffraction:
u(x, z) = Σ_m a_m exp(j(ω_{x,m} x + ω_{z,m} z)), ω_{z,m} = √(k² − ω_{x,m}²), (8)
where a_m are the coefficients of the Fourier series expansion of the periodized field on the initial line, and the sum runs over the M discrete frequencies within the band. This discrete, Fourier-generators-based model describes the field continuously at any spatial point as a superposition of the basis functions exp(j(ω_{x,m} x + ω_{z,m} z)).
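The Fourier-generators model of (8) can be sketched in the same way; the period, model size, and coefficients below are illustrative assumptions:

```python
import numpy as np

# Sketch of the Fourier-generators model of (8): a superposition of plane waves
# exp(j(wx*x + wz*z)) with wz = sqrt(k^2 - wx^2) >= 0, i.e. propagating in the
# positive z direction.
wavelength = 633e-9
k = 2 * np.pi / wavelength
X = 100 * wavelength                  # transversal period of the region of interest

M = 16                                # number of retained spectral components
wx = (2 * np.pi / X) * (np.arange(M) - M // 2)   # discrete frequencies, |wx| < k
wz = np.sqrt(k**2 - wx**2)            # all wx here lie strictly inside the band

rng = np.random.default_rng(1)
a = rng.standard_normal(M) + 1j * rng.standard_normal(M)

def field_fourier(x, z):
    """Evaluate the Fourier model continuously at point(s) (x, z)."""
    x = np.atleast_1d(x)[:, None]
    z = np.atleast_1d(z)[:, None]
    return np.exp(1j * (wx * x + wz * z)) @ a

u = field_fourier(0.0, 0.0)
print(abs(u[0] - a.sum()))  # at the origin every generator contributes its a_m
```

As with the B-F sketch, a small discrete coefficient set defines the field continuously everywhere in the volume.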

The generalization of the model in (8) to three dimensions is straightforward. Another transversal dimension y brings an extra term ω_y y into the exponents of the plane waves [8]. Now the 2D function on the initial plane must be assumed space limited also along y, as the discretization of ω_y corresponds to periodization along y.

3.3. Dimensionality of the Models and Field Characteristics

The derived models involve a finite number of basis functions in a linear combination to build up a light field. The number of nonzero weighting coefficients determines the dimensionality of any forward or inverse problem related to the computation or reconstruction of a light field. Therefore, it is important to relate this number to the physical properties of the approximated field. The specified region of interest and target detail level can be naturally related to the spatial and frequency content of the field, that is, to the number of degrees of freedom of the field. As the field can be completely described by the function on the initial line z = 0, its number of degrees of freedom is equal to the number of degrees of freedom of that function. The number of degrees of freedom of a function is the minimal set of numbers which describes it completely. In terms of the Wigner distribution, this is the area in the space-frequency plane over which the Wigner distribution of the function is essentially nonzero [34]. This can be measured by the space-bandwidth product between the spatial extent Δx and the frequency extent Δω:
N = (1/2π) Δx Δω, (9)
where the factor 1/(2π) renormalizes the angular frequency to an ordinary one. The number N can be related to the number of independent Gabor atoms with unit space-bandwidth product in the Gabor representation of the function. Alternatively, if the function is sampled uniformly along x at the Nyquist rate, that is, with sampling step 2π/Δω, then N becomes the number of essentially nonzero samples from which the continuous function can be reconstructed. In practice, N can often be estimated from the application at hand. For example, it can be related to the physical properties of a sensing device. For synthetic data and other applications, reasonable assumptions about the desired detail level of the target can be used.
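For concreteness, the space-bandwidth product can be evaluated as in the following sketch; the spatial and frequency extents are illustrative assumptions:

```python
import numpy as np

# Sketch: estimate the number of degrees of freedom on the initial line as the
# space-bandwidth product N = (spatial extent) x (angular-frequency extent),
# renormalized by 1/(2*pi).
wavelength = 633e-9
k = 2 * np.pi / wavelength

delta_x = 100 * wavelength            # assumed spatial extent of the field on z = 0
delta_w = 2 * k                       # full angular-frequency extent [-k, k]

N = delta_x * delta_w / (2 * np.pi)   # space-bandwidth product
print(N)                              # = 200 for these extents
```

Equivalently, sampling the extent delta_x with the Nyquist step 2π/delta_w yields the same count of essentially nonzero samples.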

3.3.1. Fourier Generators

The number of coefficients required by this model is directly related to the bandwidth Δω of the field on the initial line:
M = (1/2π) X Δω = (X/Δx) N. (10)
Equation (10) suggests that M is directly proportional to N. This result is expected, as the Fourier generators discretize the frequency band of the field. The proportionality coefficient X/Δx defines the excess which needs to be taken. The period X must be larger than Δx so that the periodic replicas of the field do not overlap. However, choosing X close to Δx does not guarantee that an overlap will not occur on a line at a further distance z. The discrete frequency axis defines a discrete Fourier spectrum of the field at any line z, keeping the field periodic with the same period X. On the other hand, a spatially limited pattern tends to spread its transversal support when propagated along z. Therefore, given a maximal distance z_max according to the specified region of interest, one must ensure that X is larger than the spatial support of the field at z_max. The relative increase of this spatial support is smaller (closer to 1) for a larger Δx at a fixed distance z_max.

3.3.2. Bessel-Fourier Generators

The coefficients c_n in the Bessel-Fourier model are the Fourier series coefficients of Ũ(φ). Their number can be determined from the frequency support of Ũ(φ). Direct comparison of (2) and (7) yields a one-to-one correspondence between A(ω_x) and Ũ(φ) on the half circle ω_x = k cos φ, ω_z = k sin φ. Hence, the support of Ũ(φ) coincides with the frequency support of the field on the initial line, mapped onto the circle, and can be used to estimate the number of coefficients.

The duality property of the Fourier transform can be used to obtain the frequency support of A(ω_x), regarded as a function of ω_x, as the spatial extent Δx. However, the frequency support of Ũ(φ) with respect to φ is not the same, even though it is related to Δx. The highest frequency in the Fourier transform of Ũ(φ) is determined by the harmonic exp(jkx cos φ) at the largest |x| within the support Δx. In communication theory, such a harmonic function is recognized as a special case of a frequency-modulated signal, and its frequency support is approximated according to Carson's rule [35]. The frequency support of Ũ(φ) with respect to φ is estimated accordingly as being proportional to kΔx. Hence, the dimensionality of the Bessel-Fourier model required to describe a field with transversal support Δx on the reference line is
M ≈ kΔx = 2πΔx/λ. (12)
This result suggests that the Bessel-Fourier model requires a small number of generators when the spatial support of the field is comparable to the wavelength λ. From another point of view, if N is assumed to be fixed, then the excess in M compared to N is determined by the ratio between δ = Δx/N and λ. In fact, δ is the size of the finest detail structure in the field.

4. Irregularly Sampled Diffraction Fields

An irregular grid is specified as the set of sampling points {(x_p, z_p)}, p = 1, …, P, or, in polar coordinates, {(r_p, θ_p)}, with the correspondence x_p = r_p cos θ_p and z_p = r_p sin θ_p. The sampled field is specified as the sample values u_p = u(x_p, z_p). The light field reconstruction problem is to find the unknown field-generating coefficients c_n or a_m, given the irregularly distributed samples u_p. Equations (6) and (8) can be written for each point in the irregular sampling set to obtain one equation for each p = 1, …, P. Each of these sets of equations forms a linear system for the unknown coefficients c_n or a_m, which can be expressed in matrix form:
u = Gb, (14)
where b is the unknown vector of the field-generating coefficients, collecting either the c_n or the a_m, and u = [u_1, …, u_P]^T is the vector of given samples. G is the reconstruction matrix, which has two different forms depending on the discrete model. For the Bessel-Fourier model its entries are
G_{p,n} = j^n J_n(k r_p) exp(jnθ_p), (15)
and for the Fourier model
G_{p,m} = exp(j(ω_{x,m} x_p + ω_{z,m} z_p)). (16)
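Both reconstruction matrices are fully determined by the sample positions and can be assembled as in the following sketch; the point set, wavelength, and model sizes are illustrative assumptions:

```python
import numpy as np
from scipy.special import jv

# Sketch: assembling the two reconstruction matrices.  Each row corresponds to
# one irregularly placed sample point and each column to one generator.
wavelength = 633e-9
k = 2 * np.pi / wavelength

rng = np.random.default_rng(2)
P = 64                                          # number of irregular samples
x = rng.uniform(-50, 50, P) * wavelength
z = rng.uniform(-50, 50, P) * wavelength
r, theta = np.hypot(x, z), np.arctan2(z, x)     # the same points in polar form

# Bessel-Fourier matrix: generators evaluated at the polar sample coordinates
N = 8
n = np.arange(-N, N + 1)
G_bf = (1j) ** n * jv(n, k * r[:, None]) * np.exp(1j * n * theta[:, None])

# Fourier matrix: plane-wave generators evaluated at the Cartesian coordinates
X = 200 * wavelength                            # assumed transversal period
M = 2 * N + 1                                   # same dimensionality as the B-F model
wx = (2 * np.pi / X) * (np.arange(M) - M // 2)
wz = np.sqrt(k**2 - wx**2)
G_f = np.exp(1j * (wx * x[:, None] + wz * z[:, None]))

print(G_bf.shape, G_f.shape)  # both P x M, set only by the sample positions
```

Keeping the two models at the same dimensionality, as here, is what makes their spectra directly comparable in the experiments of Section 6.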

The straightforward approach to solving for the coefficient vector is to invert the matrix G. Note that the structure of this matrix is determined only by the positions of the known samples. Some field specifications might use more samples to describe the fine details of a scene and fewer for the uniform regions. Such clustering of the samples causes a large condition number of the matrix G, and the inversion becomes numerically unstable. In addition, there might be noise, and/or the scene samples can be inconsistent with the physical models of (6) and (8). As the inconsistencies might be of various origins and difficult to predict in the common case, it is reasonable to assume that their nature is random, as is any eventual noise appearing upon scene capture. These two factors can be accounted for in a common random term η:
u = Gb + η, (17)
where u is now the vector of given data samples, b is the unknown coefficient vector, collecting either the c_n or the a_m depending on the model chosen for reconstruction, and η is a random vector drawn from a zero-mean normal distribution. Upon direct inversion, the high condition number of G causes strong amplification of the random term, and it dominates over the reconstructed coefficients.
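The observation model of (17) can be simulated as follows; a generic random matrix stands in for the reconstruction matrix, and the sizes and noise level are assumptions:

```python
import numpy as np

# Sketch of the observation model (17): the given samples are the model output
# plus a zero-mean Gaussian term lumping noise and physical inconsistency.
rng = np.random.default_rng(3)
P, M = 64, 17
G = rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)

sigma = 0.05                          # standard deviation of the inconsistency term
eta = sigma * (rng.standard_normal(P) + 1j * rng.standard_normal(P)) / np.sqrt(2)
u = G @ b + eta                       # inconsistent data samples at the grid points
print(np.linalg.norm(eta) / np.linalg.norm(G @ b))  # relative inconsistency level
```

The single Gaussian term is deliberately generic: it covers measurement noise and model mismatch alike, which is exactly why it suffices for the experiments that follow.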

5. Regularized Reconstruction

A linear system can be solved by taking the pseudoinverse with the help of the SVD [36]. The SVD decomposes a P × M matrix G as G = USV^H, with U and V containing the left and right singular vectors of G, respectively. S is a diagonal matrix with the value-ordered singular values σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_M ≥ 0 of G. The Moore-Penrose pseudoinverse G⁺ = VS⁺U^H, where S⁺ contains the reciprocals of the nonzero singular values, can be used to find a solution b = G⁺u for a given data vector u [36].
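In NumPy, the SVD-based pseudoinverse solution reads as follows; the full-rank, noise-free test system is an assumption:

```python
import numpy as np

# Sketch: Moore-Penrose pseudoinverse via the SVD, G = U S V^H, giving the
# solution b = V S^+ U^H u.
rng = np.random.default_rng(4)
P, M = 32, 8
G = rng.standard_normal((P, M))
b_true = rng.standard_normal(M)
u = G @ b_true                              # consistent, noise-free data

U, s, Vh = np.linalg.svd(G, full_matrices=False)
b = Vh.conj().T @ ((U.conj().T @ u) / s)    # pseudoinverse solution

print(np.linalg.norm(b - b_true))           # ~0 for a full-rank consistent system
```

The division by the singular values s is the step that turns dangerous when some of them are small, which motivates the regularization discussed next.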

If the SVD pseudoinverse is applied to the noisy observation vector, the noise norm in the resulting reconstruction will be boosted by a factor of up to 1/σ_M, the reciprocal of the smallest singular value. In the case when the condition number σ_1/σ_M of G is large, 1/σ_M is large as well, and the noise will dominate the reconstruction. A common remedy is to apply Tikhonov regularization, that is, to introduce a penalty term into the inversion [28]. This is equivalent to solving the minimization problem
min_b ( ‖Gb − u‖² + α²‖b‖² ), (18)
that is, to minimizing the norms of both the residual and the solution. Tikhonov regularization balances the contradictory requirements for a small residual and a small norm of the solution through the penalty parameter α [28]. The minimum norm requirement ensures smoothness of the solution and thus robustness to noise, while minimizing the residual ensures that the solution is close to the target. An optimal value of α can be determined automatically by Morozov's discrepancy principle [28].
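In SVD form, Tikhonov regularization replaces the reciprocals 1/σ by the filter factors σ/(σ² + α²), and Morozov's principle picks α so that the residual norm matches the noise norm. The sketch below implements this with a bisection search; the ill-conditioned test system, noise level, and search interval are assumptions:

```python
import numpy as np

# Sketch: Tikhonov regularization in SVD form with Morozov's discrepancy
# principle.  Small singular values are damped rather than inverted.
rng = np.random.default_rng(5)
P, M = 64, 16
G = rng.standard_normal((P, M))
G[:, -1] = G[:, -2] + 1e-6 * rng.standard_normal(P)  # make G ill-conditioned
b_true = rng.standard_normal(M)
noise = 0.01 * rng.standard_normal(P)
u = G @ b_true + noise

U, s, Vh = np.linalg.svd(G, full_matrices=False)

def tikhonov(alpha):
    # Regularized solution: filter factors s/(s^2 + alpha^2) replace 1/s
    return Vh.T @ (s / (s**2 + alpha**2) * (U.T @ u))

def discrepancy(alpha):
    # Residual norm minus noise norm; increases monotonically with alpha
    return np.linalg.norm(G @ tikhonov(alpha) - u) - np.linalg.norm(noise)

lo, hi = 1e-12, 1e3
for _ in range(100):                   # bisection on a logarithmic scale
    mid = np.sqrt(lo * hi)
    lo, hi = (lo, mid) if discrepancy(mid) > 0 else (mid, hi)
alpha = hi

err_tik = np.linalg.norm(tikhonov(alpha) - b_true)
err_pinv = np.linalg.norm(np.linalg.pinv(G) @ u - b_true)
print(err_tik, err_pinv)               # regularization suppresses noise blow-up
```

The plain pseudoinverse amplifies the noise component along the smallest singular direction by roughly 1/σ_M, while the filtered solution caps that amplification at about 1/(2α).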

The SVD form of the Tikhonov regularized solution is very convenient for analysis purposes. However, finding the SVD of a matrix is of cubic complexity [36]. A more computationally attractive approach is based on iterative matrix solvers [28]. The minimizer of the Tikhonov functional in (18) is equivalent to the solution of the linear system [28]
(G^H G + α²I) b = G^H u. (19)
The matrix G^H G + α²I is Hermitian and positive definite. Therefore, it can be inverted with an iterative method, which builds the solution step by step, updating the solution vector each time until a desired accuracy is achieved [30]. In many cases the iterations converge fast enough to ensure lower complexity than the SVD-based Tikhonov solution. The conjugate gradient method is one of the most rapidly convergent and numerically stable algorithms [30]. It iterates as follows:
(1) Initialize b_0 arbitrarily, r_0 = G^H u − (G^H G + α²I) b_0, and p_0 = r_0.
(2) For i = 0 to I − 1:
(a) q_i = (G^H G + α²I) p_i;
(b) γ_i = (r_i^H r_i)/(p_i^H q_i);
(c) b_{i+1} = b_i + γ_i p_i;
(d) r_{i+1} = r_i − γ_i q_i;
(e) β_i = (r_{i+1}^H r_{i+1})/(r_i^H r_i);
(f) p_{i+1} = r_{i+1} + β_i p_i.
(3) End.
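A direct transcription of the iteration, applied matrix-free to the regularized normal equations, may look as follows; the test system and the value of the regularization parameter are assumptions:

```python
import numpy as np

# Sketch of the CG iteration applied matrix-free to the regularized normal
# equations (G^H G + alpha^2 I) b = G^H u.
rng = np.random.default_rng(6)
P, M = 64, 16
G = rng.standard_normal((P, M)) + 1j * rng.standard_normal((P, M))
u = rng.standard_normal(P) + 1j * rng.standard_normal(P)
alpha = 0.1

def apply_A(v):
    # Product with G^H G + alpha^2 I without ever forming the M x M matrix
    return G.conj().T @ (G @ v) + alpha**2 * v

b = np.zeros(M, dtype=complex)                  # (1) initial solution
r = G.conj().T @ u - apply_A(b)                 #     initial residual
p = r.copy()                                    #     initial search direction
for _ in range(M):                              # (2) at most M iterations
    q = apply_A(p)                              # (2a) matrix-vector product
    gamma = (r.conj() @ r) / (p.conj() @ q)     # (2b) step size
    b = b + gamma * p                           # (2c) solution update
    r_new = r - gamma * q                       # (2d) residual update
    beta = (r_new.conj() @ r_new) / (r.conj() @ r)  # (2e) correction factor
    p = r_new + beta * p                        # (2f) next conjugate direction
    r = r_new

b_direct = np.linalg.solve(G.conj().T @ G + alpha**2 * np.eye(M), G.conj().T @ u)
print(np.linalg.norm(b - b_direct))             # ~0 after M iterations
```

Applying G and then its conjugate transpose, as in apply_A, is what keeps the method matrix-free: the normal-equations matrix is never stored.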

The CG method updates the solution at each iteration with a step along the search direction (step (2c)). The search directions are conjugate to each other, which guarantees convergence in at most M iterations, with M the number of unknown coefficients. In practice, the algorithm converges to a predefined accuracy in far fewer iterations I < M. The most computationally expensive operation at each iteration is the matrix-vector product at step (2a), which requires O(PM) operations when G and G^H are applied in sequence. The total complexity of the algorithm is O(IPM), lower than the O(PM²) required by the SVD-based Tikhonov solution.

6. Experiments and Results

The light field reconstruction approaches presented in this paper are evaluated by controlled experiments on simulated data. The experiments are described in two subsections. The first subsection describes the details of input data simulation and the criterion for reconstruction performance evaluation, common to all experiments. The second subsection presents experimental results and discusses the benefits of the regularized reconstruction in various scenarios.

6.1. Data Simulation and Performance Evaluation

All experiments are carried out on synthetic data. The data is simulated using the observation model of (17), where the matrix is computed according to (15) for the B-F model or according to (16) for the Fourier model. The matrices relate generating coefficients with data samples, pseudorandomly distributed within a volume of interest.

The dimensionality of the reconstruction matrix must be the same for both models so that the reconstruction results are comparable. However, this dimensionality depends on different factors for the different models (cf. Section 3.3). For both models, nonzero coefficients are able to produce a Gaussian beam inside a square region of interest with size . The size of the region of interest is selected so that the adjacent replicas of the field produced by the Fourier generators do not overlap at distance comparable to . Figure 1 depicts the magnitude of the fields produced by the B-F generators and the Fourier generators by showing the values on a uniform grid which spans the whole region of interest.

The samples of the field used for its reconstruction are simulated by (17) at randomly selected positions within the volume of interest. The number of points is chosen to be greater than the number of nonzero coefficients. The underdetermined case is not considered interesting, as the minimal norm solution might diverge substantially from the true underlying coefficient vector [27, 37]. In some experiments, part of the specified data points are selected to form clusters with size much smaller than the region of interest. The samples are randomly distributed according to a uniform distribution both inside and outside of the clusters. The size and position of the clusters and the number of points inside and outside the clusters define different sample densities in different subdomains of the region of interest. Varying these parameters helps to investigate the effect of variable sample density within the light field support. The influence of these parameters on the ill-posedness of the reconstruction problem is examined in the first group of experiments. Based on the results, representative scenarios are selected to test the reconstruction approaches.

In a fair experiment, the reconstruction performance must be compared with the inconsistency of the input data. Hence, the inconsistency level and the reconstruction error must be measured in terms of the same quantity. Often one model is used to simulate the samples and the other to reconstruct the light field, so that an inverse crime does not take place when experimenting on simulated data [28]. The models are characterized by sets of generating coefficients which cannot be compared directly. Instead, the sets of coefficients are used to compute the reconstructed light field within the whole region of interest on a uniform, rectangular grid with sampling steps Δ_x and Δ_z along x and z. The error is measured on this uniform grid as
E = ‖u_rec − u_sim‖ / ‖u_sim‖, (20)
where u_sim and u_rec are vectors formed by the field values computed on the grid with the simulation and reconstruction models, respectively. An inconsistency level ε, comparable to this error, can be selected as a percentage of the energy of the field.
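Interpreting the error as the relative L2 norm of the difference between the two field vectors on the grid (an assumption consistent with the energy normalization above), the measure can be computed as:

```python
import numpy as np

# Sketch of the error measure: relative L2 norm of the difference between the
# simulated and reconstructed field vectors on the same uniform grid.
def reconstruction_error(u_sim, u_rec):
    return np.linalg.norm(u_rec - u_sim) / np.linalg.norm(u_sim)

u_sim = np.array([1.0 + 0j, 2.0, 3.0])    # field from the simulation model
u_rec = u_sim * 1.01                      # field deviating by 1 percent
print(reconstruction_error(u_sim, u_rec)) # ~0.01
```

Because both fields are evaluated on the same grid, the measure is agnostic to which of the two generator sets produced each of them.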

6.2. Experiments

The light field reconstruction problem becomes ill-posed and requires regularization when the spectrum of the reconstruction matrix (either the B-F or the Fourier one) is widely spread and contains many small values. The structure of the matrices depends only on the specified data point distribution. A first group of experiments investigates the spectra of these matrices for different data point distribution scenarios. A second group of experiments illustrates the need for, and benefit of, regularization even for some cases of well-distributed and physically consistent data points. Finally, the last group of experiments shows the benefits and limitations of the regularization for data distributions which make the reconstruction problem ill-posed. This group also includes an experiment which demonstrates a potential practical application of the considered reconstruction approach.

6.2.1. Matrix Singular Values

This group of experiments examines the spectra of the two reconstruction matrices for various sizes of the volume of interest and various specified data point densities. Spatial variations of the density are simulated by forming data point clusters of various sizes, shapes, and positions.
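A clustered point distribution and the resulting sorted singular values can be generated as in the following sketch; a generic complex-exponential matrix stands in for the reconstruction matrix, and all sizes are illustrative assumptions:

```python
import numpy as np

# Sketch of the experiment setup: irregular points, half of them packed into a
# small central cluster, and the singular values of a point-dependent matrix.
rng = np.random.default_rng(7)
side = 1.0                                   # square region of interest
P_half = 128                                 # points per subdomain

pts_out = rng.uniform(-side / 2, side / 2, (P_half, 2))
cluster_side = side / 10                     # cluster much smaller than the region
pts_in = rng.uniform(-cluster_side / 2, cluster_side / 2, (P_half, 2))
pts = np.vstack([pts_out, pts_in])           # 2 * P_half sample positions

M = 32
freqs = np.pi * np.arange(M) / side          # generator frequencies (assumed)
G = np.exp(1j * pts[:, :1] * freqs)          # P x M matrix set by point positions

s = np.linalg.svd(G, compute_uv=False)       # singular values, descending
print(s[0] / s[-1])                          # condition number of the matrix
```

Varying the cluster size and the split of points between the two subdomains in this setup is exactly the knob the experiments below turn to control the spread of the spectrum.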

The plots in Figure 2 represent the singular values of the two reconstruction matrices for a varying size of the region of interest. The region has a square shape, and the number of irregularly distributed points is selected so that the matrices are overdetermined. No clusters are formed in this experiment. The vertical axis in Figure 2 represents the singular value, while the abscissa represents its index after the singular values have been sorted in descending order. Both matrices show their most compact spectra when the model dimensionality fits the size of the region of interest. Moving away from this matching size, the spectra of both matrices spread more. The Fourier matrix has well-concentrated singular values, while the B-F matrix is still ill-conditioned even for the matching size. Hence, in the absence of clusters, the Fourier matrix can still be used for reconstruction without regularization, as the next group of experiments will demonstrate. Irregularly distributed samples in a square region of interest do not have wide angular diversity, and the Bessel-Fourier generators do not bring enough information to recover the coefficients from the samples. This causes the widely spread singular values of the B-F matrix. It is possible to demonstrate that the spectra of the B-F matrix become more compact if the volume of interest is elongated, that is, wider along one direction than along the other, which increases the angular diversity of the samples.

The next experiments investigate how the presence of clusters influences the spectra of the two matrices. The basic experiment varies the size of a single square cluster positioned in the center of a square region of interest. The experiments use a fixed number of points, half of which are distributed inside the cluster and the other half outside. The singular value plots in Figure 3 show that the Fourier matrix becomes ill-conditioned similarly to the B-F matrix. A smaller cluster spreads the singular values more for both matrices. However, the spectra of the B-F matrix are less sensitive to changes of the cluster size, while the spectra of the Fourier matrix vary considerably. This is another illustration of the fact that the Bessel-Fourier generators produce a well-conditioned reconstruction matrix when the angular diversity of the samples is higher. On the other hand, this also shows that the Fourier model depends more on the transversal and longitudinal diversity of the sample positions, as the cluster size controls the sample density distribution. Some additional experiments vary the position of a fixed-size cluster within the region of interest and the number of clusters. The number of clusters does not influence the spectra of either matrix significantly, provided that the sample density within the clusters is kept constant. However, a large number of clusters spreads the general distribution of samples, which improves the spectra of the Fourier matrix. The spectra of the B-F matrix improve significantly when the cluster position moves closer to the origin, thus providing samples with higher angular diversity. The spectra of the Fourier matrix remain invariant to the position of the cluster within the region of interest.

The last experiments of this group investigate how the number of given points influences the singular values of the two matrices. The simulated region of interest contains a single cluster of 100 times smaller size, located in the middle. One of the experiments varies the number of points outside the cluster from 64 to 1024, while the number of points within the cluster is kept fixed at 192. The other experiment varies the number of points inside the cluster from 64 to 1024, while the number of samples outside the cluster is kept fixed at 192. The total number of points is thus at least 256 in both experiments, so that the matrices are not underdetermined. The singular value plots of both matrices show significant influence when the points outside the cluster are varied (Figure 4), while they remain almost invariant to the number of points within the cluster. More points outside the cluster carry more information for both models, and the singular values of the reconstruction matrices become more compact. The Bessel-Fourier generators benefit from new points outside the cluster as they carry new angles, unlike new points within the cluster. The Fourier generators benefit from more points outside the cluster as this increases the overall point density, while more points within the cluster keep the overall density the same. However, this benefit vanishes after the overall density goes above a certain level, which is theoretically sufficient to reconstruct any field for the chosen volume of interest and dimensionality.

6.2.2. Regularization for Unclustered Input Data Points

This group of experiments aims at demonstrating that an ill-conditioned reconstruction matrix needs regularization even for well-distributed and physically consistent input data. The benefits and limitations of the Tikhonov regularization method are illustrated for different levels of inconsistency of the input data. The data was simulated inside a square region of interest with the B-F model using nonzero coefficients. The number of simulated points is chosen so that the reconstruction matrices of both models are overdetermined. The points do not form a cluster, so that the Fourier-model matrix has a compact spectrum while the B-F-model matrix does not (Figure 2). At first, nonregularized reconstructions were done for different levels of inconsistency, with errors shown in Table 1. The reconstruction based on the Fourier model shows practically zero error for physically consistent data, while the reconstruction with the B-F generators shows a very small, yet nonzero error. For inconsistent data, the Fourier generators are able to reconstruct the field with an error comparable to the inconsistency level. At the same time, the B-F model is unable to reconstruct the field even for a very small inconsistency. This is due to the amplification of the random term by the reciprocals of the small singular values of the B-F-model matrix, as discussed in Section 5.

In practice, the reconstruction from consistent data is done by applying an iterative technique based on the CG method to the reconstruction matrices of both models. The convergence of CG depends on the condition number of the matrix [30]. Hence, CG converges quickly for the well-conditioned Fourier-model matrix and very slowly for the ill-conditioned B-F-model matrix (Figure 5). A regularization strategy is needed when the reconstruction matrix is ill-conditioned. Tikhonov regularization changes the singular values of the matrix by the regularization parameter (cf. Section 5). Any change in the spectrum of the reconstruction matrix impairs the accuracy of the eventual reconstruction. However, the small singular values are increased, so that the random term due to the inconsistency is no longer strongly amplified. An optimal value of the regularization parameter, corresponding to a minimal error, is found by Morozov's discrepancy principle. This value optimizes the condition number of the regularized matrix, so that the CG-based Tikhonov regularization converges faster with a minimal final error (Figure 6). The final error is very close to the inconsistency level. Thus, the regularization helps to achieve a convergence rate and error very similar to those obtained when reconstructing with the well-conditioned Fourier-model matrix.
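The CG-based Tikhonov scheme can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation; the matrix `A`, data vector `b`, and parameter `alpha` stand in for the paper's elided symbols.

```python
import numpy as np

def tikhonov_cg(A, b, alpha, n_iter=200, tol=1e-10):
    """Solve min_c ||A c - b||^2 + alpha ||c||^2 by conjugate gradients
    applied to the normal equations (A^H A + alpha I) c = A^H b,
    which are Hermitian positive definite for alpha > 0."""
    Ah = A.conj().T
    op = lambda v: Ah @ (A @ v) + alpha * v   # action of A^H A + alpha I
    c = np.zeros(A.shape[1], dtype=complex)
    r = Ah @ b                                # residual of the zero start
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        q = op(p)
        step = rs / np.vdot(p, q).real
        c = c + step * p
        r = r - step * q
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return c

# Small demonstration on consistent synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10)) + 1j * rng.standard_normal((40, 10))
b = A @ (rng.standard_normal(10) + 1j * rng.standard_normal(10))
c = tikhonov_cg(A, b, alpha=1e-8)
rel_err = np.linalg.norm(A @ c - b) / np.linalg.norm(b)
```

For alpha tending to zero this reduces to plain CG on the unregularized normal equations, whose convergence rate degrades with the condition number of the matrix, which is precisely the behavior discussed above.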

6.2.3. Benefits and Limitations of Regularization for Clustered Data Points

The last group of experiments aims at illustrating the benefits and the limitations of the regularized field reconstruction when the specified data point distribution contains clusters. In such scenarios, the reconstruction matrices of both models have widely spread spectra. In this context, it is interesting to investigate how the factors affecting these spectra influence the regularized reconstruction performance. The factors with the greatest influence on the spectrum are the size of the cluster and the number of points outside the cluster. The performance is assessed by the reconstruction error and by the convergence rate of the CG-based Tikhonov regularization.

The first experiment of this group investigates the performance of the Tikhonov regularized reconstruction for different levels of inconsistency of the input data. The B-F generators are used to simulate the data points inside a square region of interest. Half of the points form a cluster 100 times smaller than the region of interest, located at its center. The field was reconstructed from the known samples by SVD-based Tikhonov regularization for different values of the regularization parameter. The plots of the reconstruction error versus the regularization parameter are shown in Figure 7. The plots show that the Tikhonov regularization is able to compensate completely for the clustering of inconsistent input data. If the samples are physically consistent, only an approximate solution can be achieved, albeit with sufficiently low error. Figure 8 shows the convergence rate of the CG-based Tikhonov regularized reconstruction for inconsistent input data, with the value of the regularization parameter computed by Morozov's principle. The B-F model leads to a slightly faster convergence rate than the Fourier model, demonstrating an inverse crime: the data points were simulated with the same B-F model used for the reconstruction. Comparison with Figure 6 shows that both models perform similarly to the nonclustered case. Thus, the Tikhonov regularization is able to compensate completely for the ill-posedness caused by the data clustering.
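The SVD-based Tikhonov solution and the selection of the regularization parameter by Morozov's discrepancy principle can be sketched as follows. This is a hedged illustration under assumed names and a hypothetical parameter grid; the paper's own symbols for the matrix, noise level, and parameter are elided in this chunk.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov-regularized solution via the SVD of A: the reciprocal
    singular values 1/s are replaced by the filtered factors
    s / (s^2 + alpha), which damps the small-singular-value terms."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    coef = (s / (s**2 + alpha)) * (U.conj().T @ b)
    return Vh.conj().T @ coef

def morozov_alpha(A, b, delta, alphas):
    """Morozov's discrepancy principle: pick the largest alpha whose
    residual ||A c_alpha - b|| does not exceed the noise level delta."""
    for alpha in sorted(alphas, reverse=True):
        c = tikhonov_svd(A, b, alpha)
        if np.linalg.norm(A @ c - b) <= delta:
            return alpha, c
    return min(alphas), c   # fall back to the smallest alpha

# Demonstration on synthetic data with a known inconsistency level delta
rng = np.random.default_rng(2)
A = rng.standard_normal((60, 20))
b_clean = A @ rng.standard_normal(20)
noise = rng.standard_normal(60)
delta = 0.01 * np.linalg.norm(b_clean)
b = b_clean + delta * noise / np.linalg.norm(noise)
alpha, c = morozov_alpha(A, b, delta, np.logspace(-8, 2, 30))
residual = np.linalg.norm(A @ c - b)
```

The residual of the selected solution matches the inconsistency level by construction, mirroring the behavior reported for the experiments above.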

Next, it is interesting to test whether decreasing the size of the cluster further affects the performance of the regularized reconstruction, as it affects the spectra of the reconstruction matrices (Figure 3). In this experiment, the data inconsistency level is fixed and the size of the cluster is varied. All other parameters of the scenario are the same as in the previous experiment. Figure 9 shows the convergence of the CG-based Tikhonov regularized reconstruction involving the Fourier generators. The optimal regularization parameter for each curve is obtained by Morozov's discrepancy principle. The optimal value is smaller for smaller cluster sizes; hence, the condition number of the regularized reconstruction matrix increases and the convergence of the CG algorithm becomes slower.
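The spectral effect of clustering can be reproduced with a small numerical sketch. The dimensions, positions, and the 1-D Fourier-generator matrix below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def fourier_matrix(x, n_coef, extent):
    """Reconstruction matrix of a 1-D Fourier-generator model sampled
    at irregular positions x: A[m, n] = exp(i w_n x_m)."""
    n = np.arange(n_coef) - n_coef // 2
    w = 2.0 * np.pi * n / extent          # harmonic frequencies
    return np.exp(1j * np.outer(x, w))

def clustered_positions(n_out, n_in, extent, cluster_frac, rng):
    """n_out points spread over the region plus n_in points inside a
    central cluster cluster_frac times the region size."""
    outside = rng.uniform(-extent / 2, extent / 2, n_out)
    inside = rng.uniform(-extent * cluster_frac / 2,
                         extent * cluster_frac / 2, n_in)
    return np.concatenate([outside, inside])

# Compare the conditioning without a cluster and with a tight cluster
conds = {}
for frac in (1.0, 0.01):
    rng = np.random.default_rng(1)
    x = clustered_positions(64, 64, 1.0, frac, rng)
    conds[frac] = np.linalg.cond(fourier_matrix(x, 32, 1.0))
print(conds)   # the tighter cluster yields the larger condition number
```

The near-duplicate rows produced by the tight cluster spread the singular values of the matrix, in line with the trend shown in Figure 3.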

Another factor which influences the spectra of the reconstruction matrices is the number of given data samples outside the cluster (Figure 4). It is interesting to check the influence of this number on the reconstruction error and on the convergence of the CG-based reconstruction. The B-F generators are used to simulate the data points inside a square region of interest with a fixed level of inconsistency. 192 points form a cluster in the middle of the region of interest, with a size 400 times smaller than this region. The number of points outside the cluster is varied from 64 to 1024, so that the total number of points is always at least 256 and the reconstruction matrices are not underdetermined. Figure 10 shows the convergence plots of the CG-based reconstruction done with the Fourier model. The plots clearly indicate the benefit of having more data points outside the cluster: the minimal reconstruction error decreases as their number grows. The optimal value of the regularization parameter increases together with the number of points, which decreases the condition number of the regularized reconstruction matrix and speeds up the convergence.

The last experiment investigates a sample distribution scenario which occurs in high-resolution light field reconstruction from multiple low-resolution, limited-aperture CCD recordings. Such a scenario demonstrates the practical applicability of the proposed reconstruction approach. In addition, this experiment reconstructs a light field which originates from a realistic object and therefore has a more sophisticated distribution. The reconstruction is done again with one transverse dimension only, for the sake of comparability with the other experiments. The object at the initial line is a 256-sample vertical strip from the “Lena” image, which is a standard test material in image processing. The bandwidth of this object contains 256 nonzero frequency components, which coincide with the coefficients of the Fourier generators model. The light emerging from the object is propagated within the region of interest, which determines the effective bandwidth of the image. The CCD sensors are considered to have twice lower frequency resolution, that is, an aperture twice smaller than the transversal extent of the region of interest. This setting results in 64 samples per CCD. Samples of the light field are captured by 6 CCD recordings obtained at 6 different CCD positions on a transversal line, such that the oversampling factor is 1.5. The positions of the CCDs are selected such that the samples are distributed more densely around the origin and more sparsely towards the sides of the line, as shown in Figure 11. In this manner, the sampling approximates a density distribution which is optimal in the sense of [11]. The higher sample density at the center forms a cluster which spreads the singular values of the reconstruction matrix, as is evident from Figure 11. Complex values at the CCD sample positions are simulated from the coefficients of the Fourier generators model. In practice, such complex values can be obtained by, for example, temporal phase shifting at each CCD position. The CCD noise is simulated by adding an inconsistency to the simulated CCD sample values. The regularized, CG-based reconstruction shows rapid convergence to an error comparable to the inconsistency level, as illustrated in Figure 12.
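The multi-CCD sampling geometry can be sketched as follows. The pixel pitch and the CCD centre positions are illustrative assumptions; only the counts (6 CCDs, 64 samples each, 256 Fourier coefficients, oversampling factor 1.5) follow the text.

```python
import numpy as np

N_COEF = 256                     # Fourier coefficients of the object
SAMPLES_PER_CCD = 64             # samples per low-resolution CCD
PITCH = 1.0                      # assumed (normalized) CCD pixel pitch
APERTURE = SAMPLES_PER_CCD * PITCH

def ccd_positions(center):
    """Transversal sample positions of one CCD centred at `center`."""
    return center + (np.arange(SAMPLES_PER_CCD) - SAMPLES_PER_CCD / 2) * PITCH

# Six CCD centres, chosen denser around the origin, sparser towards the sides
centers = np.array([-3.0, -1.25, -0.4, 0.4, 1.25, 3.0]) * APERTURE
x = np.sort(np.concatenate([ccd_positions(c) for c in centers]))
oversampling = x.size / N_COEF
print(x.size, oversampling)      # 384 samples, oversampling factor 1.5
```

The overlapping central apertures produce the higher sample density near the origin, that is, the cluster that spreads the singular values of the reconstruction matrix.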

7. Conclusion

This paper has addressed the problem of monochromatic light field reconstruction from irregularly distributed samples with physically inconsistent values. The proposed reconstruction method is based on finite-dimensional modeling of the problem and regularized inversion. Our approach encompasses a wide range of applications in the areas of 3D display and beam shaping. The dimensionality of both models can be directly related to the number of degrees of freedom of the field with a certain excess. The excess depends on the ratio between the transversal extent and the volume of interest for the Fourier model, and on the ratio between the finest detail level and the wavelength for the B-F model. A uniform sample density makes the reconstruction problem well-posed when described by the Fourier model and ill-posed when described by the B-F model. For such a density, satisfactory reconstruction can be achieved with a number of points close to the number of degrees of freedom of the field. However, spatial variations of the density make the reconstruction problem ill-posed for both models. Regularized reconstruction is able to compensate for the ill-posedness when the sample density variations are not too large within the region of interest; the reconstruction error and the convergence of the iterative solver are then very similar to the well-posed case. Large variations of the sample density increase the reconstruction error above the level of inconsistency of the input sample data. In this sense, increasing the redundancy in the number of samples beyond the number of degrees of freedom of the field brings a clear benefit.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and recommendations which helped to improve the manuscript. This work was supported by the Academy of Finland (project No. 213462, Finnish Programme for Centres of Excellence in Research 2006–2011 and project No. 137012 “High-Resolution Digital Holography: A Modern Signal Processing Approach”).