Ghost imaging based on Pearson correlation coefficients*
Yu Wen-Kaia),b), Yao Xu-Ria),b), Liu Xue-Fenga), Li Long-Zhena),b), Zhai Guang-Jieb)†
Key Laboratory of Electronics and Information Technology for Space System, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190, China
University of Chinese Academy of Sciences, Beijing 100049, China

Corresponding author. E-mail: gjzhai@nssc.ac.cn

*Project supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595) and the National High Technology Research and Development Program of China (Grant No. 2013AA122902).

Abstract

Correspondence imaging is a new modality of ghost imaging, in which a positive or negative image can be retrieved by simple conditional averaging of the reference frames that correspond to relatively large or small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating the Pearson correlation coefficient between the bucket detector intensity and the brightness at each pixel of the reference frames, pixel by pixel. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the number of samplings reduced by means of our new method.

Keywords: coherence; image forming and processing; probability theory
PACS: 42.25.Kb; 42.30.Va; 02.50.Cw
1. Introduction

In traditional ghost imaging (GI), an optical source generates a signal beam and a reference beam at a beam splitter; the signal beam illuminates an object, after which its total intensity is collected by a bucket detector with no spatial resolution, while in the other arm the spatial distribution of the light is measured by a high-spatial-resolution detector. Neither detector can "see" the object on its own, but a "ghost" image can be retrieved by cross-correlating the outputs of the two detectors. The initial experimental demonstration of GI used the entangled signal and idler photons generated from spontaneous parametric down-conversion,[1, 2] hence ghost-image formation was ascribed to the quantum entanglement of the photons.[3] A controversy soon arose, however, when theory[4] and experiment[5, 6] showed that GI is also achievable with pseudo-thermal light obtained by passing a laser beam through a rotating ground-glass diffuser, or even with true thermal light.[7] This sparked a lively debate on the nature of thermal-light GI.[8, 9] As a consequence, GI with a classical source attracted the interest of many groups and developed numerous branches, such as computational GI,[10] compressive GI,[11] differential GI,[12] optical encryption,[13] adaptive compressive GI,[14] and lensless GI with sunlight.[15]

In particular, Luo et al.[16, 17] found a seemingly completely nonlocal form of imaging in which conditional averaging of the reference measurements could improve the image visibility with fewer exposures and reduced computation time. They called this technique correspondence imaging (CI). However, no strict analytical proof was provided. Soon after, Wen[18, 19] offered some theoretical explanations of their findings, while Shih et al.[20] suggested a quantum interference model. However, Ref. [20] failed to show why negative images can be formed in thermal CI, or why the visibility can be substantially improved beyond the 1/3 limit of the thermal-light second-order correlation. In contrast, these points were explained to some extent by Wen in Refs. [18] and [19]. The CI method further demonstrates the importance of intensity fluctuations in nonlocal imaging with thermal light.[21, 22] Later, in Ref. [21], Yao et al. also gave a statistical-optics model to explain the CI phenomenon. Further experimental and theoretical developments of this positive-negative image concept have been presented in many other papers.[23-26] In this paper, we present a new statistical explanation which we hope provides deeper insight into the essence of CI. Moreover, on the basis of this interpretation, we propose a more rigorous and standard analysis of GI in which a ghost image is obtained by calculating the Pearson correlation coefficient between the bucket detector intensity and the brightness at each pixel of the reference detector, proceeding pixel by pixel. This method further answers what kind of distribution the error obeys and how the error depends on the sample size, i.e., the number of measurements.

This paper is organized as follows. In Section 2, we experimentally demonstrate ghost imaging based on Pearson correlation coefficients (GIPCC) in a computational GI scheme and present some results to compare its performance with other GI reconstructions. Then, in Section 3, we give a statistical analysis of CI positive-negative image formation and an explanation of our imaging approach. We show that the values recorded by a bucket detector play the role of a selection criterion in CI, and can also be used to calculate the correlation coefficients between the bucket brightness and the brightness at each pixel of the frames. Our analysis thus casts doubt on whether CI is a rigorous and standard approach to obtain ghost images, while demonstrating that GIPCC is a better way of reconstructing binary objects with high image quality and fewer measurements. Finally, in Section 4, we briefly summarize this work.

2. Experiment and results

Our experiment is based on computational GI, as shown in Fig. 1. The advantage of computational GI is that it can be performed without a high-spatial-resolution detector; it only requires the spatial distribution of the light field at the object plane, which can be pre-computed for a given free-space propagation distance. Here we use a digital micromirror device (DMD) consisting of 1024 × 768 micro-mirrors, each of size 13.68 μm × 13.68 μm, rather than a spatial light modulator, to generate the random spatial distributions. Each mirror can be oriented at +12° or −12° away from its initial position, so the light falling on it is reflected into one of two directions. The frame (modulation) frequency of the DMD is up to 32552 Hz. In our experiment, the light from a 55 W halogen lamp is projected onto the DMD, first passing through an aperture diaphragm and a beam expander. Random binary patterns are encoded onto an area of 160 × 160 micromirrors (pixels) of the DMD. We image these random patterns IR onto an object, which is a black-and-white film printed with "A". Only the light reflected from the micromirrors oriented at +12° is projected onto the object. A convex lens then collects the corresponding total light intensity IB into a bucket (single-pixel) detector. Here we use a 1/1.8 in. charge-coupled device (CCD) with 1280 × 1024 pixels and an exposure rate of 26 frames per second as the bucket (single-pixel) detector, by integrating the gray values of all its pixels in each exposure. A ghost image of the object can be reconstructed by cross-correlating the random binary patterns IR with the bucket intensity IB.

The recorded total intensity IB from the bucket detector is random because of the random modulation of the DMD and the purely stochastic intensity fluctuations of the thermal light source. We calculate ΔIB = IB − 〈IB〉, and then plot the probability distribution of ΔIB. By using 〈IB〉 as a boundary, we divide {IB} into two subsets,

B+ = {IB : IB − 〈IB〉 > 0},   B− = {IB : IB − 〈IB〉 < 0},

which contribute to the right (B+) and left (B−) halves of Fig. 2.

Fig. 1. Experimental setup for computational GI. A halogen lamp illuminates the DMD through an aperture diaphragm and a beam expander. The reflected random patterns are imaged onto an object, which is a black-and-white film printed with "A", and then collected by a bucket (single-pixel) detector. IR: binary random patterns encoded on the DMD. IB: total intensity recorded by the bucket (single-pixel) detector.

Fig. 2. Probability distribution of ΔIB, where ΔIB = IB − 〈IB〉. The left side (blue) is binned for ΔIB < 0, and the right side (pink) for ΔIB > 0. The dashed line denotes 〈ΔIB〉.

Statistically, each element of the average matrix 〈IR〉 is a constant, which reflects only the average brightness of the lamp source at that pixel, rather than any information about the object. Remarkably, according to Luo's method, after the partition a positive (or negative) correspondence image can be retrieved by averaging only those IR frames that correspond to B+ (or B−), identified by the subscript "+" (or "−"):

〈IR〉± = (1/N±) Σ_{IB ∈ B±} IR,

where N± is the number of frames in the subset B±.
Actually, artificially dividing the frames {IR} into "+" and "−" subsets is not standard, and so is not optimal. Moreover, Luo's procedure cannot answer what kind of distribution the error obeys, or how the error depends on the sample size. In this paper, we report another imaging approach, based on Pearson correlation coefficients, which answers these questions.
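As an illustration, the conditional-averaging step of CI can be written in a few lines of Python (a minimal sketch; `R` holds the flattened IR frames row by row and `B` the corresponding bucket values IB; these array names are illustrative rather than taken from the experimental software):

```python
import numpy as np

def correspondence_images(R, B):
    """Correspondence imaging (CI) by conditional averaging.

    R : (m, n) array, each row one flattened reference pattern I_R.
    B : (m,) array, the bucket values I_B recorded for the m frames.
    """
    above = B > B.mean()              # frames belonging to the subset B+
    pos = R[above].mean(axis=0)       # positive image  <I_R>_+
    neg = R[~above].mean(axis=0)      # negative image  <I_R>_-
    return pos, neg, pos - neg        # the difference gives the higher-contrast image
```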

First, we flatten the two-dimensional image pixel matrix into a one-dimensional column vector, and denote it by x. Our starting equation is

B = Rx,
where B is m × 1, R is m × n, x is n × 1, m is the sample size (total number of frames), and n is the total number of pixels. Each pattern distribution can be shaped into a row vector, then m random distributions can be rearranged row by row into a measurement matrix R. That is, each row of R records the brightness of pixels in one frame. As performed in the experiment, the actual random patterns sequentially fed into the DMD only have two values, 0 or 1, corresponding to the ± 12° angles at which the micromirrors are deflected.
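As a minimal numerical sketch of this measurement model (with an illustrative 64 × 64 object, 4000 frames, and an assumed additive noise level, rather than the experimental 160 × 160 pixels and 11940 frames), one may write:

```python
import numpy as np

rng = np.random.default_rng(0)

v = nu = 64                          # image size v x nu (scaled down from 160 x 160)
n = v * nu                           # total number of pixels
m = 4000                             # sample size (number of frames)

obj = np.zeros((v, nu))              # synthetic binary object mask
obj[20:44, 28:36] = 1                # a transparent bar standing in for the printed "A"
x = obj.ravel()                      # flatten to a length-n vector

R = rng.integers(0, 2, size=(m, n)).astype(float)   # each row: one 0/1 DMD pattern
B = R @ x + rng.normal(0.0, 1.0, size=m)            # bucket signal with additive detection noise
```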

For simplicity, here we only consider a binary object mask, either transparent (1) or opaque (0). The simplest model of GI is a probabilistic one: a transparent pixel correlates with the bucket signal, while an opaque pixel is independent of it. The principle of CI is that frames with above-average bucket signals tend to overlap well with the transparent part of the object, while frames with below-average signals tend to overlap well with the opaque part. Here we give a more intuitive picture of how a transparent pixel is singled out (i.e., of the imaging) as follows. The definition of a Pearson correlation coefficient (which is just a number) is well known.[27] Here we calculate the sample Pearson correlation coefficient rj (corresponding to pixel j in x) between B and the column vector Rj of R. The coefficient rj is given by

rj = cov(Rj, B)/(σRj σB),

where cov is the covariance, σ the standard deviation, and −1 ≤ rj ≤ 1, j = 1, 2, …, n. A positive (negative) value of rj means a positive (negative) linear relationship, while rj = 1 or −1 occurs only when all the points of the scatter plot lie exactly on a straight line. The value 1 denotes total positive correlation, 0 no correlation, and −1 total negative correlation. Thus rj measures only the linear relationship between Rj and B, j = 1, 2, …, n. If rj is significantly greater than 0, then pixel j is coded as 1 (transparent). After n such calculations, we obtain a vector r of length n, and then reshape this one-dimensional vector back into a two-dimensional array of υ × ν pixels, where υ × ν = n. This array is the image recovered by GIPCC, which regards each sample Pearson correlation coefficient rj as the j-th pixel of the image.
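This definition translates directly into a vectorized computation of all n coefficients at once (our sketch, not the authors' code; the small constant `eps` guarding against constant pixel columns is an added safeguard):

```python
import numpy as np

def gipcc(R, B, eps=1e-12):
    """Sample Pearson correlation coefficient r_j between each pixel column R_j and B."""
    dR = R - R.mean(axis=0)                  # center every pixel column
    dB = B - B.mean()                        # center the bucket record
    cov = dR.T @ dB / (len(B) - 1)           # cov(R_j, B) for all j at once
    r = cov / (dR.std(axis=0, ddof=1) * dB.std(ddof=1) + eps)
    return r                                 # length-n vector of coefficients

# Usage: reshape the coefficient vector back into the image plane,
# e.g.  img = gipcc(R, B).reshape(v, nu)
```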

For a quantitative comparison of the image quality, we introduce the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) as figures of merit:

PSNR = 10 lg(MAX²/MSE),
MSE = (1/(υν)) Σ_{i=1}^{υ} Σ_{j=1}^{ν} [To(i, j) − Tr(i, j)]²,

where MAX is the maximum possible pixel value of the image, To represents the original image consisting of υ × ν pixels, and Tr stands for the retrieved image. Naturally, the larger the PSNR value, the better the quality of the recovered image.
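In code this figure of merit reads as follows (a sketch; `peak = 1` assumes the images are normalized to the range [0, 1], which is our assumption for a binary mask rather than a normalization stated in the text):

```python
import numpy as np

def psnr(T_o, T_r, peak=1.0):
    """Peak signal-to-noise ratio (dB) between the original image T_o and the retrieved image T_r."""
    mse = np.mean((np.asarray(T_o, float) - np.asarray(T_r, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)   # larger PSNR = better reconstruction
```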

To check the quality of GIPCC, we compare it with other GI approaches. Figure 3(a) shows a direct image of the object illuminated by the random DMD patterns, taken by a CCD camera. The reconstructions by GIPCC, by the background-subtracted correlation function ΔGI = 〈IRIB〉 − 〈IR〉〈IB〉, and by the normalized correlation function g2 are shown in Figs. 3(b)–3(d). Results of CI (R+, R−, and ΔG = R+ − R−) are given in Figs. 3(e)–3(g). For a fair comparison, we calculate the PSNR with a fixed total number of 11940 measurements. From the PSNR values, we can see that the image quality of GIPCC is better than that of the other three GI approaches.
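For reference, the two conventional correlation reconstructions used in this comparison can be sketched per pixel as follows (our reading of the standard definitions; the exact normalization used for g2 in Fig. 3 is not spelled out in the text):

```python
import numpy as np

def delta_gi(R, B):
    """Background-subtracted correlation  <I_R I_B> - <I_R><I_B>  evaluated at every pixel."""
    return (R * B[:, None]).mean(axis=0) - R.mean(axis=0) * B.mean()

def g2(R, B):
    """Normalized second-order correlation  <I_R I_B> / (<I_R><I_B>)  at every pixel."""
    return (R * B[:, None]).mean(axis=0) / (R.mean(axis=0) * B.mean())
```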

Fig. 3. Comparison between GIPCC and other conventional GI techniques, all reconstructing the full 160 × 160 pixel image. All figures are automatically gray-scale compensated. (a) The direct image of the object. (b)–(d) Images retrieved by GIPCC, ΔGI, and g2 with 11940 patterns, with PSNRs of 8.312, 8.311, and 8.188 dB, respectively. (e) and (f) The R+ positive and R− negative correspondence images, obtained from 5882 and 6058 patterns, with PSNRs of 7.425 and 5.562 dB, respectively. (g) The differential image ΔG = R+ − R− obtained from all 11940 patterns (the R− frames entering with inverted sign), with a PSNR of 8.088 dB.

3. A statistical interpretation

In mathematics, the support of a function is the set of points where its values are not zero. Suppose that the object x has k nonzero entries, and define its support set as S, where ||S||0 = k. Here the ℓ0 norm is defined as ||x||0 = #{i : xi ≠ 0}, where #{·} represents the number of elements in the set. This is a very intuitive measure of the sparsity of a vector x, counting the number of nonzero entries in it.[28] We let RS denote the sub-matrix of R that contains all the columns corresponding to the support set. Similarly, R+ and R− stand for the sub-matrices of R consisting of all the rows that correspond to B+ and B−, respectively.

Let us recall the initial second-order correlation function G(2) = 〈IRIB〉, where the bucket signal can be treated as a weight. When all the elements in B are the same constant c, G(2) reduces to c〈IR〉, which in fact cannot reflect any information about the object. If we normalize this bucket signal to a binary sequence taking the two values "1" and "0" (B > 〈B〉 for "1" and B < 〈B〉 for "0", or vice versa), then we will get a positive (or negative) image with high image quality. Thus CI is indeed remarkable in that it can transform the poor performance of the initial second-order correlation into a good one. Here we build a model to further explain this phenomenon. The conditional probability of R given B± is

P±(R) = (1/Γ±) ∫_{B±} P(R|B) P(B) dB,

and the total probability formula gives

P(R) = Γ+ P+(R) + Γ− P−(R),

where B± stands for ±(B − 〈B〉) > 0 and Γ± = ∫_{B±} P(B) dB > 0. Denoting the means of P±(R) and P(R) by R̄± and R̄, respectively, we have

R̄ = Γ+ R̄+ + Γ− R̄−,

so (R̄+ − R̄) and (R̄− − R̄) have opposite signs (unless both vanish). The covariance 〈(R − R̄)(B − B̄)〉 splits into two contributions indexed by ±, and for a pixel in the support set S the two contributions have the same sign. Since the corresponding bucket factors (B̄+ − B̄) > 0 and (B̄− − B̄) < 0 have opposite signs, we must have R̄+ > R̄ and R̄− < R̄ within S, which makes it possible to generate a positive or negative image through selective averaging of R.

The column vector Rj can be treated as a time sequence of reference light-field intensities, and each element of B can be seen as the inner product of one modulation pattern with the object. Notice that the DMD matrix R is binary, consisting of the two values 0 and 1, so B is the sum of the pixel-spot brightnesses over the transparent pixels, and the probability distribution of B involves the convolution of the distributions of the single-pixel brightness (Rij of frame i at pixel j). To avoid the operation of convolution, a natural assumption is that the summation over the k transparent pixels of the object mask is equivalent to the summation over k frames at a single pixel of the DMD. The brightness at every pixel spot has the identical distribution P(Rij). For a transparent pixel, we may decompose P(Rij) into two sub-distributions conditional on B+ or B−. In principle, these two conditional distributions can be calculated as follows. Suppose that the Laplace transform of P(Rij) is Ψ(s). The convolution of the distributions of k transparent pixels then has Laplace transform Ψ^k(s), whose inverse Laplace transform gives the distribution F(R) of the sum over the k pixels. The two conditional distributions are obtained by splitting the range of the sum into the two subsets corresponding to B+ and B−; for example, F+(R) is F(R) restricted to B ∈ B+ and renormalized by Γ+. Taking the k-th root of the Laplace transform of F+(R), and then performing an inverse Laplace transform, gives the required conditional distribution for a single pixel, denoted P+(Rij). However, we do not need this explicit form. As shown above, the mean of P+(R) is greater than the mean of P(R), and the distribution P+(R) contains much richer information than the mean alone. Since the spots on opaque pixels are independent of the bucket, the condition B > 〈B〉 has no effect on the distribution at any opaque pixel; the distribution at an opaque pixel is still P(R).

To demonstrate this more clearly, we assume ||B+||0 = u and ||B−||0 = v, i.e., the subsets B+ and B− contain u and v frames, respectively, and define SC as the complementary set of S, which stands for the opaque pixels. The conditional averages are then

R̄+ = (1/u) Σ_{i: Bi ∈ B+} Ri,   R̄− = (1/v) Σ_{i: Bi ∈ B−} Ri,

where Ri denotes the i-th row (frame) of R. Within R̄+, the entries at pixels in S are drawn from P+(Rij), while those at pixels in SC still follow P(Rij). Since the mean of P+(Rij) exceeds that of P(Rij), the value of R̄+ at the transparent pixels is larger than at the opaque ones, and the reconstruction appears as a positive image. In an ideal case, the distribution of Rij at pixel j is of only two types, P+(Rij) or P(Rij), depending on whether pixel j is coded as "1" or "0"; by clustering the distributions at all pixels into two clusters, we obtain the codes of the pixels, and there are many standard methods for inferring two alternative populations or for clustering. Similarly, in R̄− the entries at pixels in S follow P−(Rij), whose mean is smaller than that of P(Rij), so R̄− shows up as a negative image. Since ΔG = R̄+ − R̄− has roughly twice the modulation amplitude of either R̄+ or R̄−, its contrast is much higher than that of both. Our analysis thus reveals the formation mechanism of the positive-negative image in CI. Since there is no need to multiply the selected reference patterns by the bucket detector signal, the computation time of CI can be reduced. With a hardware logical implementation,[23, 24] this approach can be even faster and represents a step towards real-time practical applications of correlation imaging. Despite these advantages, we still need to answer how the error depends on the sample size, and what kind of distribution the error obeys. Additionally, simple conditional averaging of the reference patterns discards much useful information.
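A short Monte Carlo toy model (with hypothetical frame and pixel numbers, not the experimental data) illustrates the point: conditioning on an above-average bucket value raises the mean brightness only at the transparent pixels, leaving the opaque ones at the unconditional mean:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20000, 200, 40                  # frames, pixels, transparent pixels (toy numbers)

R = rng.integers(0, 2, size=(m, n)).astype(float)   # 0/1 pixel brightness R_ij
B = R[:, :k].sum(axis=1)                  # bucket = sum over the k transparent pixels

above = B > B.mean()                      # subset B+
print(R[above][:, :k].mean(), R[:, :k].mean())   # transparent pixels: conditional mean > 0.5
print(R[above][:, k:].mean(), R[:, k:].mean())   # opaque pixels: conditional mean ~ 0.5
```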

A key advantage of our GIPCC method is that it fills this gap and solves these problems. For pairs drawn from an uncorrelated bivariate normal distribution, the sampling distribution of Pearson's correlation coefficient is governed by Student's t-distribution with (m − 2) degrees of freedom, via the statistic

t = r √[(m − 2)/(1 − r²)].[29]

In practice, confidence intervals and hypothesis tests relating to the correlation coefficient are usually carried out using the Fisher transformation[30]

z = (1/2) ln[(1 + r)/(1 − r)] = arctanh(r),

where "ln" is the natural logarithm function and "arctanh" is the inverse hyperbolic tangent function. If (Rj, B) has a bivariate normal distribution with population correlation ρ, and the (Rij, Bi) pairs used to form rj are independent for i = 1, …, m, then z approximately follows a normal distribution with mean arctanh(ρ) and standard deviation 1/√(m − 3). The imaging is then converted to a statistical hypothesis test (with the null hypothesis r = 0).
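Both test statistics are one-liners (standard textbook formulas, written here as a sketch rather than code from the paper):

```python
import numpy as np

def t_statistic(r, m):
    """Student's t value for testing r = 0 with m samples (m - 2 degrees of freedom)."""
    return r * np.sqrt((m - 2) / (1.0 - r ** 2))

def fisher_z(r, m):
    """Fisher-transformed coefficient and its standard deviation 1/sqrt(m - 3) under the null."""
    return np.arctanh(r), 1.0 / np.sqrt(m - 3)
```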

For a binary object mask, the gray value takes on either "1" or "0". Taking Fig. 3(b) as an example, the number of sampling patterns is m = 11940, so the background error approximately follows a normal distribution with mean 0 and standard deviation 1/√(m − 3) ≈ 0.0092. A random variable U ∼ N(0, 0.0092²) with a sample size of 160 × 160 = 25600 can be generated by computer as an approximation of the background error. The probability distribution of U is presented in Fig. 4(a). Suppose the probability of the rejection region (i.e., false positives) of such an error is P{U > λ} = α = 0.01, where α is the significance level; then the critical value is λ = 0.0212, the 99th percentile of this probability distribution. In Fig. 3(b), all the pixels whose gray values are smaller than this critical value are regarded as true negatives (background) and set to 0, while the rest are set to 1, indicating a significant positive correlation. We thereby obtain the final ghost image shown in Fig. 4(b). Comparing the PSNR values, the image quality of GIPCC is much better than those of the other GI techniques given in Figs. 3(c)–3(g). The maximum difference in PSNR between GIPCC and CI is more than 8 dB, i.e., corresponding to a maximum MSE difference of about 6.
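The binarization step can be sketched as below, where the critical value λ is taken from the normal approximation N(0, 1/(m − 3)) of the background error described above; the function name and the use of `scipy.stats.norm` are our choices, not part of the original procedure:

```python
import numpy as np
from scipy.stats import norm

def binarize_by_test(r_img, m, alpha=0.01):
    """Keep pixels whose coefficient exceeds the critical value; reject the rest as background."""
    sigma = 1.0 / np.sqrt(m - 3)                        # std of the null (background) correlation
    lam = norm.ppf(1.0 - alpha, loc=0.0, scale=sigma)   # critical value of the rejection region
    return (r_img > lam).astype(int)                    # 1: significant positive correlation, 0: background

# With m = 11940 and alpha = 0.01 this gives lam ~ 0.021, close to the value quoted above.
```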

Fig. 4. Results of hypothesis testing. (a) Probability distribution of the error samples. The location of the critical value is identified by the red dashed line. (b) Retrieved image based on statistical hypothesis testing, with a PSNR of 16.301 dB.

In the model of independent pixels, the correlation between a transparent pixel and the bucket is r(Rt, B) ∼ t^(−1/2), where t is the total number of transparent pixels. If r is small, Fisher's transformed variable is roughly F(r) ≈ t^(−1/2). Since the standard deviation of Fisher's variable is σ ≈ m^(−1/2), where m is the total number of sampling frames, it is required that F(r) ≫ σ, i.e., m ≫ t. Therefore, if the object is sparse (like the film used in our experiment), the number of measurements in GIPCC can be dramatically reduced. To further demonstrate this advantage of GIPCC, the PSNR values of GIPCC and other conventional GI techniques versus the number of measurements are given in Fig. 5. We can see clearly that the image quality obtained by the various methods improves steadily with the number of measurements. The PSNR of GIPCC is always higher than that of the other GI approaches, so the number of measurements needed by GIPCC to reach a given PSNR is much smaller than that of the other GI methods, which agrees with our theory.

Fig. 5. PSNR values of GIPCC and other conventional GI techniques versus the number of measurements.

Now let us analyze the image formation in our GIPCC method. Without loss of generality, consider a and b of zero mean and unit variance, where a stands for the standardized brightness at one pixel and b for the standardized bucket signal; the correlation between a and b is then r = 〈ab〉. Define

Γ± = ∫_{b±} P(b) db,

where b+ stands for b ≥ 0 and b− for b < 0, and define the conditional mean values as

ā± = 〈a〉±,   b̄± = 〈b〉±,

the averages being taken over the subset b±. Therefore, 〈a〉 = 0 gives

Γ+ ā+ + Γ− ā− = 0.

Thus, either both ā± vanish, or they have opposite signs. Similarly, from 〈b〉 = 0 we have

Γ+ b̄+ + Γ− b̄− = 0,   with b̄+ > 0 > b̄−.
We now prove by reduction to absurdity that if r = 0 then ā+ = ā− = 0. Representing (ā+, ā−) and (b̄+, b̄−) as points in a plane, both fall on the same line passing through the origin (0, 0), and obviously b̄+ > 0 > b̄−. Since a and b have the same variance, either i) ā+ > 0 and ā− < 0, or ii) ā+ < 0 and ā− > 0, except in the trivial case ā+ = ā− = 0. In case i) the products ā+b̄+ and ā−b̄− are both positive, and in case ii) they are both negative, so the two conditional contributions to r = 〈ab〉 cannot cancel; assuming r = 0 together with ā+, ā− ≠ 0 therefore leads to a contradiction. Thus, r = 0 implies ā+ = ā− = 0. Under the linear model b = ρa + ε, ρ > 0 implies ā+ > 0 and ā− < 0; that is, under the assumption that r > 0, we have ā+ > 〈a〉 and ā− < 〈a〉, which indicates a transparent pixel. Therefore, if the Pearson correlation coefficient r between Rj and B is positive, we must have 〈Rj〉+ > 〈Rj〉 > 〈Rj〉−, and vice versa. This indicates how GIPCC generates a ghost image.

4. Conclusion

In conclusion, we have improved the correspondence imaging technique by developing an alternative method to reconstruct the image, whereby each pixel of the image is acquired by calculating a Pearson correlation coefficient between the bucket intensity and the brightness at each pixel of the DMD modulation patterns. This method illustrates how the error depends on the sample size and what kind of distribution the error obeys, and thus can be seen as a more rigorous and general approach compared with the original correspondence imaging scheme. We have experimentally demonstrated our method in a computational ghost imaging setup with thermal light, and have obtained a much better peak signal-to-noise ratio compared with other ghost imaging approaches. Furthermore, we have provided a theoretical analysis and intuitive interpretation of the formation of the positive– negative image in correspondence imaging. This new protocol offers a general approach applicable to all ghost imaging techniques. The applications include but are not limited to remote sensing, positioning, and imaging.

Acknowledgment

We are grateful to Wei-Mou Zheng for the illuminating discussion on the topic presented here. We warmly acknowledge Kai-Hong Luo and Ling-An Wu for providing suggestions, encouragement, and helpful discussion.

References
1 Strekalov D V, Sergienko A V, Klyshko D N and Shih Y H 1995 Phys. Rev. Lett. 74 3600 DOI: 10.1103/PhysRevLett.74.3600
2 Pittman T B, Shih Y H, Strekalov D V and Sergienko A V 1995 Phys. Rev. A 52 R3429 DOI: 10.1103/PhysRevA.52.R3429
3 Abouraddy A F, Saleh B E A, Sergienko A V and Teich M C 2001 Phys. Rev. Lett. 87 123602 DOI: 10.1103/PhysRevLett.87.123602
4 Gatti A, Brambilla E, Bache M and Lugiato L A 2004 Phys. Rev. Lett. 93 093602 DOI: 10.1103/PhysRevLett.93.093602
5 Bennink R S, Bentley S J and Boyd R W 2002 Phys. Rev. Lett. 89 113601 DOI: 10.1103/PhysRevLett.89.113601
6 Ferri F, Magatti D, Gatti A, Bache M, Brambilla E and Lugiato L A 2005 Phys. Rev. Lett. 94 183602 DOI: 10.1103/PhysRevLett.94.183602
7 Zhang D, Zhai Y H, Wu L A and Chen X H 2005 Opt. Lett. 30 2354 DOI: 10.1364/OL.30.002354
8 Scarcelli G, Berardi V and Shih Y 2006 Phys. Rev. Lett. 96 063602 DOI: 10.1103/PhysRevLett.96.063602
9 Gatti A, Bondani M, Lugiato L A, Paris M G A and Fabre C 2007 Phys. Rev. Lett. 98 039301 DOI: 10.1103/PhysRevLett.98.039301
10 Shapiro J H 2008 Phys. Rev. A 78 061802 DOI: 10.1103/PhysRevA.78.061802
11 Katz O, Bromberg Y and Silberberg Y 2009 Appl. Phys. Lett. 95 131110 DOI: 10.1063/1.3238296
12 Ferri F, Magatti D, Lugiato L A and Gatti A 2010 Phys. Rev. Lett. 104 253603 DOI: 10.1103/PhysRevLett.104.253603
13 Yu W K, Li S, Yao X R, Liu X F, Wu L A and Zhai G J 2013 Appl. Opt. 52 7882 DOI: 10.1364/AO.52.007882
14 Yu W K, Li M F, Yao X R, Liu X F, Wu L A and Zhai G J 2014 Opt. Express 22 7133 DOI: 10.1364/OE.22.007133
15 Liu X F, Chen X H, Yao X R, Yu W K, Zhai G J and Wu L A 2014 Opt. Lett. 39 2314 DOI: 10.1364/OL.39.002314
16 Wu L A and Luo K H 2011 AIP Conf. Proc. 1384 223
17 Luo K H, Huang B Q, Zheng W M and Wu L A 2012 Chin. Phys. Lett. 29 074216 DOI: 10.1088/0256-307X/29/7/074216
18 Wen J M 2011 arXiv:1101.4869v1
19 Wen J M 2012 J. Opt. Soc. Am. A 29 1906 DOI: 10.1364/JOSAA.29.001906
20 Meyers R E, Deacon K S and Shih Y 2012 Appl. Phys. Lett. 100 131114 DOI: 10.1063/1.3698158
21 Yao Y P, Wan R G, Xue Y L, Zhang S W and Zhang T Y 2013 Acta Phys. Sin. 62 154201 (in Chinese) DOI: 10.7498/aps.62.154201
22 Liu X F, Yao X R, Li M F, Yu W K, Chen X H, Sun Z B, Wu L A and Zhai G J 2013 Acta Phys. Sin. 62 184205 (in Chinese) DOI: 10.7498/aps.62.184205
23 Li M F, Zhang Y R, Luo K H, Wu L A and Fan H 2013 Phys. Rev. A 87 033813 DOI: 10.1103/PhysRevA.87.033813
24 Li M F, Zhang Y R, Liu X F, Yao X R, Luo K H, Fan H and Wu L A 2013 Appl. Phys. Lett. 103 211119 DOI: 10.1063/1.4832328
25 Bai X, Li Y Q and Zhao S M 2013 Acta Phys. Sin. 62 044209 (in Chinese) DOI: 10.7498/aps.62.044209
26 Zhao S M and Zhuang P 2014 Chin. Phys. B 23 054203 DOI: 10.1088/1674-1056/23/5/054203
27 Pearson K 1895 Proc. R. Soc. Lond. 58 240 DOI: 10.1098/rspl.1895.0041
28 Elad M 2010 Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (New York: Springer)
29 Rahman N A 1968 A Course in Theoretical Statistics (London: Charles Griffin and Company)
30 Fisher R A 1921 Metron 1 3