Abstract

Due to the influence of atmospheric turbulence, the video of an object observed with an astronomical telescope drifts randomly over time. A series of images is then obtained by taking snapshots from the video. In this paper, a method is proposed, for the first time, to improve the quality of astronomical images solely through multiframe image registration and superimposition. In order to overcome the influence of anisoplanatism, a specific image registration algorithm based on multiple local homography transformations is proposed. Superimposing the registered images yields an image with high definition. As a result, the signal-to-noise ratio, contrast-to-noise ratio, and definition are improved significantly.

1. Introduction

Anisoplanatism continuously hinders the performance of spatial object observation with ground-based telescopes and is one of the key factors that directly limits spatial resolution in wide-field imaging. In telescope imaging theory, the object spot remains invariant within the isoplanatic angle and shares the same system point spread function (PSF), whereas outside the isoplanatic angle the PSF varies with spatial position.

In 1990, Labeyrie [1] published a paper presenting an image restoration method based on speckle interferometry. Building on speckle interferometry, Ozkan et al. [2] proposed a method to recover the phase spectra of extended spatial targets by using cross-spectra; combined with the speckle method, extended astronomical images were restored. After that, Jin et al. [3] proposed a method to restore the phase of extended spatial targets by using bispectra, and the bispectrum method is spatially invariant compared with the cross-spectrum method. Ayers and Dainty [4] innovatively introduced image restoration by blind deconvolution to the spatial target observation field. Zhang [5] adopted the iterative method proposed by Dainty to alternate calculations between the spatial and spectral domains under certain restrictions and obtained a PSF that maps to an image with the highest similarity to the real scene. Wang et al. [6] optimized Dainty's method and added a filter to enhance robustness to noise. However, in real spatial target observation, wide-field extended targets are common, and the field of view is much wider than the isoplanatic angle. Experiments show that restoration performance degrades severely after speckle imaging or blind deconvolution, and even when imaging with an adaptive optics telescope, the anisoplanatic effect remains salient [7, 8]. To overcome the shortcomings mentioned above, Paxman et al. [9] proposed simulating the anisoplanatic effect by using a wavefront coding technique. Thereafter, multiframe superimposition was adopted instead of long-exposure imaging to reduce the effect of anisoplanatism and improve image definition [10]. Multiple local nonrigid transformations were adopted to compensate for spatial displacement and deformation among images [11, 12], and the transformed images were superimposed to realize image restoration. To improve image registration accuracy, an image transformation model must be established to carry out the various spatial transforms. Dan [13] proposed estimating the transformation model among anisoplanatic short-exposure images based on neural networks and reducing the anisoplanatism by image registration.

The homography matrix is a kind of spatial transformation model between images [14, 15]. Applying a homography transformation to each pixel of an image can realize linear or nonlinear transformations such as shifting, rotation, stretching, affine transformation, and perspective transformation [16, 17]. Some scholars propose using a block concept to calculate the homography matrix and achieve higher accuracy [18–20]. A coarse-to-fine method first adopts image registration on the whole field to make the key points well distributed; a more accurate local image registration is then performed as a block process based on multiscale analysis to achieve better alignment [21]. In local image registration, the block process is based on clustering. A spectral clustering algorithm based on stratification consensus has been proposed as an optimized clustering strategy to partition the image into several blocks [22]. A swarm optimization algorithm has been proposed to adjust the cluster center positions iteratively, which enhances the identifiability and mutual independence of each cluster [23, 24]. The feature distribution is often overlooked in block image registration. Deng et al. [25] propose a method that first obtains the feature distribution; the image is then partitioned into a number of blocks by clustering, the feature points are assembled toward the cluster centers along the margin directions to form a certain number of clusters, and finally block image registration between each pair of corresponding clusters is adopted to obtain better registration performance.

This paper adopts the concept of multiframe image registration and superimposition to improve image quality. Superimposing multiple low-resolution, noisy images can reduce the background noise and random drift. Image registration is an effective method to estimate the displacements and deformations between two images, and the transformation model is established from this estimation.

Our contributions are as follows: (1) Atmospheric turbulence causes astronomical images taken from the ground to have very little texture detail, weak edge structures, and a large amount of background noise, which makes it difficult for traditional algorithms (Scale Invariant Feature Transform (SIFT) [26], Speeded Up Robust Features (SURF) [27], Binary Robust Invariant Scalable Keypoints (BRISK) [28], and Features from Accelerated Segment Test (FAST) [29]) to extract feature points from astronomical images. In this paper, the Adaptive and Generic Accelerated Segment Test (AGAST) [30] algorithm and the DAISY [31] descriptor are used to extract and describe image feature points and overcome the above-mentioned shortcomings, so that a large number of uniformly distributed key points can be detected while maintaining high computing efficiency. (2) To further improve computational efficiency, this paper proposes a maximum entropy model-based method for selecting feature points and uses a method based on third-order statistics, which outperforms the Principal Components Analysis (PCA) algorithm [32, 33], to reduce the dimension of the feature description vectors. (3) Fuzzy C-means (FCM) clustering is applied to the feature points, and multiple block homography matrices between the template image and the image to be registered are calculated by an improved Random Sample Consensus (RANSAC) algorithm [34–36]. (4) The image is registered in blocks and superimposed to improve the quality of astronomical images.

The remainder of this paper is organized as follows. Section 2 presents the theory and discussion of the adopted method. Section 3 describes the experimental analysis, which evaluates the results processed by the proposed methods. Section 4 concludes with the strategies used in this paper and their theoretical basis.

2. Methods

The key issue in determining the relationship between two images captured at different times is establishing the transformation model. The geometrical transformation model is obtained from corresponding fixed points and their feature vectors or descriptors. Because feature point descriptors are not fully robust, image matching errors may occur. In this paper, the geometrical transformation model is calculated by homography estimation to solve the problem of matching errors. A point set is formed from the matched feature points, where each pair consists of a point in one image and its correspondence in the other.

The homography matrix $H$ can be used to describe the spatial correspondence between two images, as shown in Figure 1. The two points in the two images can be transformed into each other via a plane, presented as $p' = Hp$. The homography matrix is calculated by selecting four points in each image, respectively, which yields a linear equation set; the parameters of the homography matrix are then obtained by solving this equation set.
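As a concrete illustration of this four-point solution, a minimal direct linear transform (DLT) sketch is shown below; the function name and normalization are illustrative, and in practice a routine such as OpenCV's cv2.getPerspectiveTransform performs the same computation.

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Direct linear transform: estimate the 3x3 homography H (up to scale)
    from four point correspondences. `src` and `dst` are arrays of shape (4, 2)
    holding (x, y) coordinates in the two images."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)            # 8 x 9 linear system
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)                  # null-space vector gives H
    return H / H[2, 2]                        # normalize so that H[2, 2] = 1
```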

A proper homography transformation model is judged by measuring the Euclidean distance among the fixed points. This distance is treated as the matching error, i.e., the Euclidean distance between the transformed fixed-point set and its corresponding point set.

In this paper, the traditional image registration algorithm based on the homography matrix is further improved, and a multiframe astronomical image registration algorithm based on block homography estimation is proposed. Figure 2 shows the flow chart of the whole astronomical image registration process.
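To make the overall flow concrete, the sketch below strings together AGAST detection, DAISY description, RANSAC homography estimation, warping, and averaging for a stack of frames using OpenCV. It uses a single global homography per frame for brevity, whereas the method proposed in this paper estimates blockwise homographies as described in Section 2.2; all parameter values are illustrative.

```python
import cv2
import numpy as np

def register_and_stack(template, frames):
    """Simplified registration/superimposition pipeline: AGAST key points,
    DAISY descriptors, RANSAC homography, warp, average."""
    agast = cv2.AgastFeatureDetector_create()
    daisy = cv2.xfeatures2d.DAISY_create()      # requires opencv-contrib-python
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

    kp_t = agast.detect(template, None)
    kp_t, des_t = daisy.compute(template, kp_t)

    h, w = template.shape[:2]
    stack = template.astype(np.float64)
    for frame in frames:
        kp_f = agast.detect(frame, None)
        kp_f, des_f = daisy.compute(frame, kp_f)
        matches = matcher.match(des_f, des_t)
        src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_t[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        stack += cv2.warpPerspective(frame, H, (w, h)).astype(np.float64)
    return (stack / (len(frames) + 1)).astype(template.dtype)
```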

2.1. Reduced Feature Descriptor Calculation

Feature extraction and description are preliminaries of homography estimation. This paper applies the AGAST-DAISY algorithm to the template image and the input image, respectively. On astronomical images, the AGAST operator may detect a very large number of key points. Therefore, a method based on the maximum entropy model is used to refine the key points and reduce their number substantially. Firstly, all pixels of the whole image are taken into consideration; in particular, a boundary piecewise process is used to handle the pixels on the image boundaries, as shown in Figure 3. Then, the conditional probability distribution over the neighborhood of each pixel is calculated, as in Equation (3), under the assumption that each key point follows a Gaussian distribution over its neighborhood parameterized by the pixel position and by the mean value and variance of its neighborhood. Assuming that the mean value approximates the expected value, the marginal probability and the expected value along each axis are given by Equation (4) and Equation (5), and the variance in Equation (3) can be presented as Equation (6).

Solving Equation (4) yields the marginal probability distribution. Substituting the result into Equation (5), the expected value is calculated as follows:

After the conditional probability distribution of each key point is calculated, the entropy of the pixel over its neighborhood is obtained as well, as shown in the following equation:

A certain proportion of the key points with the highest entropy values is selected, and these pixels are taken as the refined key points, as shown in Figure 4.
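A minimal sketch of this entropy-based refinement is shown below, assuming a Gaussian model for the intensities in a small window around each key point; the window radius, the keep ratio, and the boundary clamping are illustrative choices of this sketch.

```python
import numpy as np

def refine_keypoints_by_entropy(image, keypoints, keep_ratio=0.3, radius=4):
    """Keep the fraction of key points whose neighborhood has the highest
    entropy. `keypoints` is a list of (x, y) pixel coordinates."""
    img = image.astype(np.float64)
    h, w = img.shape[:2]
    entropies = []
    for x, y in keypoints:
        x, y = int(round(x)), int(round(y))
        # clamp the window at the image boundary (the boundary piecewise step)
        patch = img[max(0, y - radius):min(h, y + radius + 1),
                    max(0, x - radius):min(w, x + radius + 1)]
        var = patch.var() + 1e-12
        # differential entropy of a Gaussian: 0.5 * log(2*pi*e*sigma^2)
        entropies.append(0.5 * np.log(2.0 * np.pi * np.e * var))
    order = np.argsort(entropies)[::-1]
    n_keep = max(1, int(keep_ratio * len(keypoints)))
    return [keypoints[i] for i in order[:n_keep]]
```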

Next, we adopt the DAISY operator to generate feature descriptors for the refined key points, each of which has a fixed-dimensional descriptor. Dimension reduction is therefore necessary for the sake of efficiency. To overcome the shortcomings of the PCA algorithm in dimension reduction, we propose a method based on third-order statistics. For a feature vector, the third-order cumulant is calculated as shown in the following equation, where the lag indices are integers within the valid range. The bispectrum of the feature vector is then calculated by applying the Fourier transform to the third-order cumulant. In particular, the frequency range of the Fourier transform is divided into 8 subintervals, and the transform is evaluated on each interval, so that the bispectrum of the feature vector is obtained on 8 subfrequency bands, where the two frequency variables span the spectral range of the bispectrum. Next, the diagonal slice of each subfrequency band of the bispectrum is taken, and the maximum value of this diagonal slice is selected as one element of the reduced feature vector, as shown in Equation (10) and Equation (11). Consequently, each feature vector contains 8 elements, one per subfrequency band, and the dimension of the descriptor is reduced to 8.
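One way to realize this reduction is sketched below: the diagonal slice of the third-order cumulant of the zero-mean descriptor is Fourier transformed, the resulting spectrum is split into 8 sub-bands, and the maximum of each band forms one element of the reduced descriptor. The band splitting and normalization are assumptions of this sketch rather than the authors' exact settings.

```python
import numpy as np

def bispectrum_diagonal_descriptor(v, n_bands=8):
    """Reduce a 1-D feature vector to n_bands values using the diagonal
    slice of its third-order cumulant spectrum."""
    v = np.asarray(v, dtype=float)
    v = v - v.mean()                          # cumulants require zero mean
    n = len(v)
    # diagonal slice of the third-order cumulant: c3(tau, tau)
    max_lag = n // 2
    c3 = np.empty(max_lag)
    for tau in range(max_lag):
        c3[tau] = np.mean(v[:n - tau] * v[tau:] * v[tau:])
    # Fourier transform of the diagonal slice -> diagonal bispectrum slice
    spec = np.abs(np.fft.rfft(c3))
    # split the spectrum into n_bands equal sub-bands, keep the max of each
    bands = np.array_split(spec, n_bands)
    return np.array([b.max() for b in bands])
```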

From Figures 5–7, the features obtained by using PCA show poor identifiability and discrimination, so it is difficult to distinguish certain features after PCA processing, while the feature vectors processed by the third-order cumulant have much better identifiability and discrimination and allow those features to be distinguished easily. Therefore, using the third-order cumulant processed feature vectors in feature matching achieves superior performance.

2.2. Image Registration Based on Block Homography Estimation

The determination of the number of clusters plays an important role in clustering. We use the Jeffrey divergence method to determine the number of clusters. The divergence is set as an objective function in optimized polynomial form, as shown in Equation (12). After recursive clustering, the divergence reaches its minimum while the gap value reaches its maximum simultaneously, so that the maximum intracluster compactness and the maximum intercluster discrimination are traded off.

The relationship curve between the gap value and the number of clusters is shown in Figure 8(a). After 100 iterative calculations of the gap, the results show that the gap value reaches its maximum, with a value greater than 60, when the number of clusters is 4. Thus, the most appropriate clustering is performed under this condition, and the maximum intracluster compactness and the minimum intercluster overlap are achieved simultaneously, as shown in Figure 8(b). The logarithm of the sum of squares of the number of points per cluster reaches its minimum when the number of clusters is 4, which means that the highest correlation among the intracluster points is achieved under this condition, as shown in Figure 8(c). In the Jeffrey divergence objective, the indices stand for different clusters after clustering, and each cluster is described by the covariance matrix and the mean value of its pixels.
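The idea of choosing the cluster count by measuring how well separated the resulting clusters are can be sketched as follows. This is a simplified illustration: k-means stands in for the paper's FCM clustering, the covariance regularization and the averaging over cluster pairs are assumptions, and the gap statistic of Figure 8 is not reproduced; jeffrey_divergence implements the standard symmetrized KL divergence between Gaussian clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def jeffrey_divergence(mu1, cov1, mu2, cov2):
    """Symmetrized Kullback-Leibler (Jeffrey) divergence between two Gaussian
    clusters described by their mean vectors and covariance matrices."""
    d = len(mu1)
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    diff = mu1 - mu2
    kl12 = 0.5 * (np.trace(inv2 @ cov1) + diff @ inv2 @ diff - d + logdet2 - logdet1)
    kl21 = 0.5 * (np.trace(inv1 @ cov2) + diff @ inv1 @ diff - d + logdet1 - logdet2)
    return kl12 + kl21

def mean_intercluster_divergence(descriptors, n_clusters, reg=1e-6):
    """Cluster the reduced key-point descriptors (an (N, d) array) and average
    the Jeffrey divergence over all cluster pairs; larger values indicate
    better-separated clusters."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(descriptors)
    stats = []
    for c in range(n_clusters):
        pts = descriptors[labels == c]
        cov = np.cov(pts, rowvar=False) + reg * np.eye(pts.shape[1])
        stats.append((pts.mean(axis=0), cov))
    pairs = [(i, j) for i in range(n_clusters) for j in range(i + 1, n_clusters)]
    return float(np.mean([jeffrey_divergence(*stats[i], *stats[j]) for i, j in pairs]))
```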

Determining the number of clusters is a prerequisite for clustering. In this paper, the target points are the refined key points, and the cluster centers are formed by clustering. In particular, the objective function of clustering can be treated as a multiobjective optimization procedure. We set the FCM clustering function with the minimum entropy model as the objective function; the optimized objective function is obtained when its two components reach their minimum values simultaneously, as shown in Equation (15), where the membership function represents the probability that a given key point belongs to a given cluster, as shown in Equation (16). Equation (17) represents the relationship between a key point and a cluster center, expressed through the descriptor of the key point and the descriptor of the cluster center. Equations (15)–(17) form the iterative optimization procedure. Firstly, the membership function is initialized with random values in the interval [0, 1]. Then, a termination threshold is set; when the change of the objective function falls below this threshold, the iteration is terminated and the optimized objective function is obtained. Thereafter, all the cluster centers are calculated, and the fuzzy clustering procedure is completed. The maximum correlation principle, as shown in Equation (18), is then applied to the clusters generated by FCM clustering, and the points with the maximum correlation to the cluster centers are found. In Figure 9, these points are classified into the same number of classes as there are clusters. Lastly, blockwise homography estimation is conducted to realize local image registration, as shown in Equation (19), where the two descriptors are the DAISY descriptors in the two images, respectively.
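For reference, a minimal FCM iteration over the reduced descriptors might look like the sketch below. The fuzzifier m, the tolerance, and the random initialization are illustrative defaults, and the paper's additional minimum entropy term in the objective is omitted here.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Plain FCM: alternate membership and center updates until the objective
    changes by less than `tol`. `points` is an (N, d) array of descriptors."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    prev_obj = np.inf
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        obj = float((um * d ** 2).sum())
        if abs(prev_obj - obj) < tol:
            break
        prev_obj = obj
        # standard FCM membership update from the distances to the centers
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```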

2.3. Fixed Points for Homography Estimation

The homography matrix is a 3×3 matrix with 8 degrees of freedom (DoF). Therefore, solving the 8 unknowns requires at least 4 noncollinear point positions. In this paper, the homography estimation process stated above must be carried out on each cluster or block. Here, a fixed-size neighborhood around each pixel is taken, the probability distribution of the pixel over this neighborhood is calculated, and the entropy is obtained. The points with the maximum entropy are selected; the number of candidate points is calculated via Equation (20) and is chosen in the interval of 16 to 20. Thus, the number of RANSAC iterations is reduced from 160 to 20, and the computational efficiency is improved significantly. The same optimization procedure is applied to each pair of corresponding clusters between the two images. Four fixed points among the candidate points in each cluster are selected, and the homography matrix is obtained via Equation (21) and Equation (22). Next, as shown in Figure 10, the forward- and backward-mapped point sets are computed under the Levenberg-Marquardt principle to obtain the total error of forward and backward propagation, as shown in Equation (23).

The calculation with Equations (21)–(23) is performed recursively for a number of iterations. The error between the point set and its mapped set at each iteration is recorded, and the minimum value is selected, as shown in Equation (24). The corresponding homography matrix is the optimized one obtained via the method stated above.
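A sketch of the forward/backward (symmetric) transfer error used to pick the best candidate homography is given below; the helper names and the use of OpenCV's perspectiveTransform are assumptions of this sketch, and the Levenberg-Marquardt refinement itself is not shown.

```python
import numpy as np
import cv2

def symmetric_transfer_error(H, pts_src, pts_dst):
    """Forward plus backward reprojection error for a candidate homography.
    `pts_src` and `pts_dst` are (N, 2) arrays of matched coordinates."""
    src = pts_src.reshape(-1, 1, 2).astype(np.float64)
    dst = pts_dst.reshape(-1, 1, 2).astype(np.float64)
    fwd = cv2.perspectiveTransform(src, H)                 # map source -> destination
    bwd = cv2.perspectiveTransform(dst, np.linalg.inv(H))  # map destination -> source
    err_fwd = np.linalg.norm(fwd - dst, axis=2)
    err_bwd = np.linalg.norm(bwd - src, axis=2)
    return float(err_fwd.sum() + err_bwd.sum())

def best_homography(candidates, pts_src, pts_dst):
    """Among candidate homographies (one per 4-point sample), keep the one
    with the smallest total forward/backward error."""
    errors = [symmetric_transfer_error(H, pts_src, pts_dst) for H in candidates]
    return candidates[int(np.argmin(errors))]
```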

Through the RANSAC algorithm, we can easily find the best four feature points in each cluster and calculate the best homography matrix for that cluster. From the geometric relationship of these four points, the center point of each cluster is determined. Each pixel of the whole image is then compared against these center points, and the pixel is transformed with the homography matrix of the cluster whose center is closest to it.
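The blockwise warping step described above can be sketched as follows: each output pixel is assigned to its nearest cluster center and mapped through the inverse of that cluster's homography before resampling with cv2.remap. The function name and the inverse-mapping formulation are implementation choices of this sketch.

```python
import numpy as np
import cv2

def blockwise_warp(image, homographies, centers):
    """Warp `image` with one homography per cluster. `homographies` is a list
    of 3x3 matrices and `centers` an array of (x, y) cluster centers."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)   # (N, 2)
    # label each output pixel by its nearest cluster center
    d2 = ((pix[:, None, :] - np.asarray(centers)[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    map_x = np.empty(h * w, np.float32)
    map_y = np.empty(h * w, np.float32)
    hom = np.hstack([pix, np.ones((pix.shape[0], 1))])                    # homogeneous coords
    for c, H in enumerate(homographies):
        idx = labels == c
        src = hom[idx] @ np.linalg.inv(H).T      # inverse map: output -> source
        map_x[idx] = (src[:, 0] / src[:, 2]).astype(np.float32)
        map_y[idx] = (src[:, 1] / src[:, 2]).astype(np.float32)
    return cv2.remap(image, map_x.reshape(h, w), map_y.reshape(h, w),
                     cv2.INTER_LINEAR)
```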

3. Experimental Results and Analysis

3.1. Data Acquisition

In the experiment, we captured 10 frames each of Saturn and Mars images by taking snapshots from the video at equal time intervals. Due to the effect of atmospheric turbulence, the snapshotted images are severely blurred, and complicated deformations exist in the images as well. The imaging system is set up as follows: the lens diameter of the telescope, the atmospheric coherence length describing the turbulence, and the wavelength of the visible light are fixed, and these together determine the cutoff frequency of the system. The captured Saturn images and Mars images each have a fixed size.

3.2. Evaluation Standard

In this paper, the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), root mean square error (RMSE), and power spectrum are adopted to illustrate the performance of the different methods, as shown in Equations (25)–(29). The SNR is defined in terms of the useful signal and the noise in an image; the CNR is defined by the mean pixel value of the target area, the mean pixel value of the background area, and the standard deviation of the pixels in the background area; and the RMSE is defined over the image size and the mean pixel value of the whole image.
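For concreteness, common definitions of the three scalar metrics are sketched below. The exact forms of Equations (25)–(29) are not reproduced; the dB convention for the SNR and the use of binary target/background masks are assumptions of this sketch.

```python
import numpy as np

def snr(image, target_mask, background_mask):
    """Signal-to-noise ratio: mean target signal over background standard
    deviation, expressed in dB (one common definition)."""
    signal = image[target_mask].mean()
    noise = image[background_mask].std()
    return 20.0 * np.log10(signal / noise)

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: difference between mean target and mean
    background intensity, normalized by the background standard deviation."""
    return (image[target_mask].mean() - image[background_mask].mean()) \
        / image[background_mask].std()

def rmse(image, reference):
    """Root-mean-square error between a processed image and a reference."""
    diff = image.astype(float) - reference.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```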

The autocorrelation matrix of an image is computed first.

The logarithm power spectrum is then computed from the matrix obtained by applying the Fourier transform to the autocorrelation matrix, whose entries are built from the pixel values at each coordinate position.
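A sketch of this computation is given below. By the Wiener-Khinchin theorem, the squared magnitude of the image's Fourier transform equals the Fourier transform of its circular autocorrelation, which is used here as a shortcut; the offset inside the logarithm and the radial averaging used for the one-dimensional profile are assumptions of this sketch.

```python
import numpy as np

def log_power_spectrum(image):
    """Logarithm power spectrum of a (mean-subtracted) image."""
    img = image.astype(float) - image.mean()
    f = np.fft.fft2(img)
    power = np.abs(f) ** 2          # equals the FFT of the circular autocorrelation
    return np.log10(1.0 + np.fft.fftshift(power))

def radial_profile(spectrum):
    """Collapse a 2-D spectrum to a 1-D profile by averaging over rings of
    equal spatial frequency (the one-dimensional power spectrum)."""
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    sums = np.bincount(r.ravel(), spectrum.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)
```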

In this paper, we adopt the AGAST operator to detect key points in the Saturn images and Mars images. The key points are then refined using the maximum entropy method, and the refining ratio is set to 0.3 so that all the retained points are located in the most feature-representative areas, as shown in Figure 11. From left to right, each row shows the raw data, the image with AGAST key points, and the image with refined AGAST key points. This paper adopts a block homography estimation method based on FCM clustering, and the images processed by FCM clustering form a certain number of clusters, as shown in Figure 12. In Figures 12(a) and 12(b), from left to right are the raw data and the key points in the first and second images after clustering; different colors denote different clusters, and each class of key points forms a block.

3.3. Evaluation Standard of Noise

Figures 13 and 14 show the processed results for the Saturn image and the Mars image, respectively: two images captured at different times, the superimposition of 10 raw images, and the superimposition of 3, 5, and 10 images processed by the proposed method. Figures 13(g) and 14(g) show zoomed-in views of the regions framed by the green boxes in Figures 13(a)–13(f) and Figures 14(a)–14(f). It can be seen that the image noise is suppressed as the number of superimposed images increases.

Figures 15 and 16 show that, as the number of superimposed images increases, the SNR and CNR increase while the RMSE decreases. However, the evaluation indices tend to level off when the number of superimposed images reaches 5, as shown in Tables 1 and 2.

3.4. Power Spectrum Analysis

Figures 17–20 show the logarithm power spectrum of the Saturn image, the one-dimensional logarithm power spectrum of the Saturn image, the logarithm power spectrum of the Mars image, and the one-dimensional logarithm power spectrum of the Mars image. As can be seen in Figures 18 and 20, the intensity of the one-dimensional logarithm power spectrum of the image processed by our proposed method is lower than that of the original image and of the images processed by several classic methods, so more energy is concentrated in the low-frequency domain. Consequently, the proposed algorithm outperforms the other methods, including the state of the art, in image quality enhancement.

4. Conclusion

In this paper, an image quality enhancement method based on multiple image registration and superimposition is analyzed and discussed. A series of images captured at different times from a video taken with an astronomical telescope are aligned. The major contribution of this paper is improving image registration accuracy and computational efficiency. Regarding computational efficiency, the AGAST operator is first adopted to extract key points, and a key point refining method based on maximum entropy reduces the large number of key points to a much smaller set. Then, we design a method based on high-order statistics to reduce the dimension of the DAISY descriptors of the densely located key points. Last but not least, a maximum entropy model-based method combined with RANSAC is used to decrease the number of iterations in the homography estimation.

To improve accuracy, a block process concept is adopted to conduct multiple local image registrations. The blocks are generated by FCM clustering; specifically, the objective function is set under the condition of minimum propagation error, and the blocks are obtained via iterative calculation of the cluster centers and of the maximum correlation to the cluster centers. As a result, the highest compactness of the intrablock points and relatively high independence of the interblock points are achieved. Moreover, the superimposition of multiple registered images effectively suppresses noise and calibrates pixel drift, which substantially improves the SNR and CNR.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was financially supported by the National Natural Science Foundation of China (NSFC) (Grant Number: 61471304).