Optimal Scheme of Retinal Image Enhancement using Curvelet Transform and Quantum Genetic Algorithm

  • Wang, Zhixiao (School of Electronic and Information Engineering, Xi'an Jiaotong University) ;
  • Xu, Xuebin (School of Electronic and Information Engineering, Xi'an Jiaotong University) ;
  • Yan, Wenyao (Xi'an Innovation College, Yan'an University) ;
  • Wei, Wei (School of Computer Science and Engineering, Xi'an University of Technology) ;
  • Li, Junhuai (School of Computer Science and Engineering, Xi'an University of Technology) ;
  • Zhang, Deyun (School of Electronic and Information Engineering, Xi'an Jiaotong University)
  • Received : 2013.09.09
  • Accepted : 2013.10.28
  • Published : 2013.11.30

Abstract

A new optimal scheme based on the curvelet transform is proposed for retinal image enhancement (RIE) using a real-coded quantum genetic algorithm. The curvelet transform represents edges better than the classical wavelet transform owing to its anisotropy and directional decomposition capabilities. For more precise reconstruction and better visualization, the curvelet coefficients in the corresponding subbands are modified using a nonlinear enhancement mapping function. An automatic method is presented for selecting the optimal parameter settings of the nonlinear mapping function via a quantum genetic search strategy. The performance measures used in this paper provide a quantitative comparison among different RIE methods. The proposed method is tested on the DRIVE and STARE retinal databases and compared with several popular image enhancement methods. The experimental results demonstrate that the proposed method provides superior enhanced retinal images in terms of several quantitative image evaluation indexes.


1. Introduction

The retina is an important subject in the medical treatment of pathologies [1-4]. By observing the tortuosity changes of blood vessels in the retina, clinicians can diagnose many diseases, among them hypertension, arteriosclerosis, and blindness caused by diabetes, and can collect and analyze retinal symptoms in order to develop relevant treatments [2]. It is therefore very important for clinicians to be able to clearly detect, appreciate, and recognize lesions among the numerous capillary vessels and the optic nerve present in the image. However, retinal images acquired with a fundus camera often have low gray-level contrast and a narrow dynamic range. This problem may seriously affect the diagnostic procedure and its results, because lesions and vessels in some areas of the field of view (FOV) are hardly visible to the eye specialist.

There is no doubt that image enhancement is a necessary preprocessing step when the original retinal image is not of sufficient quality for the clinicians [6-9]. Currently, most researchers have focused on retinal image segmentation [1-5, 10-14], and relatively little work has been carried out on retinal image enhancement.

To improve retinal image quality, several techniques have been proposed [6-9]. The classic one is histogram equalization. More sophisticated methods, such as local normalization [3], matched filter methods [13], adaptive histogram equalization [14], and the Laplacian method [15], have been proposed to enhance contrast. Unfortunately, local normalization strongly amplifies noise and yields poor visual results. Matched filter methods are better at enhancing local contrast, especially for blood vessels within a small area, but for the whole image the computation is expensive because many matched filters are required [8,13].

In the last decade, the wavelet transform has been widely used in medical image processing [17-21]. Fu et al. [17,18] proposed a wavelet-based histogram equalization to enhance sonogram images. Laine et al. used the wavelet transform to enhance microcalcifications in mammograms [19]. The wavelet transform is a type of multiscale analysis that decomposes an input signal into high-frequency detail and low-frequency approximation components at various resolutions [8]. To enhance features, selected high-frequency wavelet coefficients are multiplied by an adaptive gain value, and the image is then enhanced by reconstructing the modified wavelet coefficients.

It is known that wavelets perform well at representing point singularities [20]. The traditional orthogonal wavelet transform has wavelets with chiefly vertical, chiefly horizontal, and chiefly diagonal orientations [21]. Unfortunately, in higher dimensions the wavelet transform neglects the geometric properties of objects with edges and does not exploit the regularity of edge curves, so it cannot generate satisfying results.

Therefore, Signal Sparse Representation Theory (SSRT) has developed rapidly in recent years [21-25]. Sparse representation is based on the idea that a signal can be constructed as a linear combination of atoms from a dictionary, where the number of atoms in the dictionary is larger than the signal dimension. A hybrid dictionary can be constructed by integrating multiscale Gabor functions, wavelets, libraries of windowed cosines with a range of different widths and locations, multiscale windowed ridgelets, and so on. Several SSRT-based approaches have been proposed, such as the ridgelet, curvelet, and bandelet transforms [26-29], as well as methods based on spectral clustering [24,25]. Li et al. successively proposed two unsupervised feature selection algorithms for reducing data dimensionality, Nonnegative Discriminative Feature Selection (NDFS) [24] and Clustering-Guided Sparse Structural Learning (CGSSL) [25], both of which rely on spectral clustering. In NDFS, discriminative information is handled efficiently through the joint learning of nonnegative spectral analysis and linear regression with ℓ2,1-norm regularization [24]. CGSSL, which is superior to NDFS, integrates nonnegative spectral clustering and sparse structural analysis into a joint framework [25]. Li and Yang studied a multifocus image fusion method that combines the curvelet and wavelet transforms, exploiting the complementary properties of the two multiresolution analysis methods [26]. Candès et al. proposed two novel discrete curvelet transforms for the second-generation curvelets in two and three dimensions, which are widely applied in image processing, including denoising, and are simpler, faster, and less redundant [27]. Mandal et al. investigated a novel face recognition method using the curvelet transform and different dimensionality reduction tools, an early exploration of curvelet subspaces [28]. Fu and Zhao proposed a method for fusing infrared and visible images based on the second-generation curvelet transform, in which both low- and high-frequency subband coefficients are processed [29]. Different from the previous work [26-29], we develop a new retinal image enhancement method that combines the curvelet transform and a quantum genetic algorithm. Another difference is that an optimization method is proposed to find the optimal parameters used in the enhancement, so as to improve the texture of the retinal image.

In this paper, we present a novel automatic retinal image enhancement approach based on the second-generation curvelet transform and a real-coded quantum genetic algorithm. Herein, the real-coded quantum genetic algorithm is used to choose the appropriate parameters of the nonlinear mapping function, and an image fusion method is used to reduce the effect of the Gibbs phenomenon. We finally combine quantitative assessment and visual evaluation to test the performance of the proposed method. The experimental results show encouraging improvement and better visual quality than other state-of-the-art methods such as the Local Normalization (LN) [3], Adaptive Histogram Equalization (AHE) [13], Laplacian [14], and DWT [17-19] methods.

This paper is organized as follows. Section 1 gives a brief review of current retinal image enhancement techniques. Section 2 describes the second-generation curvelet transform. Section 3 presents the retinal enhancement algorithm based on the curvelet transform. Section 4 presents an automated method for selecting optimal parameter settings of the nonlinear mapping function via a quantum genetic search strategy. The experimental results and performance evaluation on two well-known databases are given in Section 5. The last section concludes the paper.

 

2. Second-generation Curvelet Transform

Candès et al. proposed the second-generation curvelet transform in 2006 [27]. The curvelet transform is a sparse representation suited to objects that are smooth away from discontinuities across curves. Curvelets differ from wavelets and related systems in that their basis elements exhibit very high directional sensitivity and are highly anisotropic [26]. In this section, we briefly review the implementation of the second-generation curvelet transform, which is simpler, faster, and less redundant [26,27].

Assume that we work throughout in two dimensions, i.e., in R². Let x denote the spatial variable, ω a frequency-domain variable, and (r, θ) polar coordinates in the frequency domain [26-29]. We define a pair of windows W(r) and V(t), called the “radial window” and the “angular window”. The frequency window Uj is defined in the Fourier domain by:
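
The equation itself is not reproduced in this copy of the text; in the standard second-generation curvelet construction of [27], the window takes the form

Uj(r, θ) = 2^(−3j/4) · W(2^(−j) r) · V(2^(⌊j/2⌋) θ / 2π)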

where ⌊j/2⌋ is the integer part of j/2.

We define a “mother” curvelet φj(x) by means of its Fourier transform φ̂j(ω) = Uj(ω); all curvelets at scale 2^(−j) are then obtained by rotations and translations of φj.

Introduce the equispaced sequence of rotation angles θl = 2π · l · 2^(−⌊j/2⌋), with l = 0, 1, 2, …, such that 0 ≤ θl < 2π, and the sequence of translation parameters k = (k1, k2) ∈ Z².

Then the curvelets at orientation θl, scale 2^(−j), and position x_k^(j,l) = R_θl^(−1)(k1 · 2^(−j), k2 · 2^(−j/2)) can be defined as follows:
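
The defining equation is missing from this copy; following [27], it can be stated as

φ_(j,l,k)(x) = φj( R_θl (x − x_k^(j,l)) )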

where Rθ denotes rotation by θ radians and Rθ^(−1) its inverse. The curvelet transform in the spatial domain and in the frequency domain are defined as follows:
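
These equations are not reproduced in this copy; the standard forms from [27], in the notation above, are

c(j, l, k) = ⟨f, φ_(j,l,k)⟩ = ∫ f(x) · conj(φ_(j,l,k)(x)) dx

c(j, l, k) = (1/(2π)²) ∫ f̂(ω) · conj(φ̂_(j,l,k)(ω)) dω = (1/(2π)²) ∫ f̂(ω) · Uj(R_θl ω) · e^(i⟨x_k^(j,l), ω⟩) dω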

Fig. 1 shows curvelet spatial domain and frequency domain.

Fig. 1Curvelet spatial domain and frequency domain

Introduce the set of equispaced slopes: tan θl = l · 2^(−⌊j/2⌋), l = −2^(⌊j/2⌋), …, 2^(⌊j/2⌋) − 1, and define

where Sθ is the shear matrix :
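
The matrix itself is not shown in this copy; in [27] the shear matrix is

Sθ = ( 1, 0 ; −tan θ, 1 )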

Then the discrete curvelet transform is defined as follows:
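
The equation is not reproduced here; a commonly quoted form from [27], consistent with the definition of τ below, is

c^D(j, l, k) = ∫ f̂(ω) · Ũj(S_θl^(−1) ω) · e^(i⟨S_θl^(−T) τ, ω⟩) dω

where Ũj denotes the Cartesian (sheared) analogue of the window Uj.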

where τ = (k1 · 2^(−j), k2 · 2^(−j/2)).

The discrete curvelet transform is invertible. In this paper, we use fast discrete curvelet transform (FDCT) via wrapping [27]. Please refer to [27] for details.

 

3. Retinal Image Enhancement via Curvelet Transform

Fig. 2 displays a schematic diagram for the proposed enhancement scheme.

Fig. 2.A schematic diagram for the retinal image enhancement

3.1. Retinal Image Preprocessing

The retinal images captured by the camera need to be converted from RGB to grayscale [8,10-16]. The green channel of a colour retinal image in RGB format gives the highest contrast between vessels and background, so this channel is a good choice for contrast enhancement. We therefore first extract the green channel of the retinal image. Fig. 3 shows a colour retinal image, its three channels, and their histograms. It is easy to see that the blue and red channels are either too dark or too bright.

Fig. 3. Top: the colour retinal image and its three channel components; bottom: corresponding histograms of the four images. (a) Colour retinal image; (b) red channel; (c) green channel; (d) blue channel
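
As a minimal sketch of this preprocessing step (assuming the image is loaded as an RGB NumPy array, for example with scikit-image; the file name is illustrative):

    import numpy as np
    from skimage import io

    def extract_green_channel(path):
        """Return the green channel of an RGB retinal image, scaled to [0, 1]."""
        rgb = io.imread(path)                              # H x W x 3 uint8 array
        return rgb[:, :, 1].astype(np.float64) / 255.0     # green channel only

    green = extract_green_channel("retina.tif")            # illustrative file name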

3.2. Curvelet Decomposition

In this stage, the grayscale retinal image will be decomposed by using digital curvelet transform.

Here we use five scales of decomposition for the curvelet transform [26]. Assume that G represents a retinal image sample; G is decomposed by the curvelet transform into curvelet coefficients C. Let Ck,l denote the coefficients at the kth scale and lth orientation, where k ∈ {1,2,3,4,5}. Here l = 1 when k = 1; l ∈ {1, …, 8} when k = 2; l ∈ {1, …, 16} when k = 3 or k = 4; and l ∈ {1, …, 32} when k = 5.

Fig. 4 shows an example of the frequency decomposition achieved by curvelet transform. It depicts the curvelet coefficients of one retinal image using five scales of decomposition and 1,8,16,16,32 directions.

Fig. 4. Curvelet coefficients of one retinal image (5 scales; 1, 8, 16, 16, 32 directions)

3.3. Nonlinear Mapping Function

To modify the high frequency coefficients Ck,l (k={2,3,4,5}), we propose a nonlinear mapping enhancement function as follows.

where 0.01 < b ≤ 1, 1 < c ≤ 20, 0.1 < k1 ≤ 0.4, 0.6 < k2 ≤ 1.

Here tk,l is the threshold, and sigm(u) is the sigmoid function: sigm(u) = 1/(1 + e^(−u)).

The threshold value tk,l can be calculated by [21]:
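
The threshold equation itself is not reproduced in this copy; a BayesShrink-style rule consistent with the quantities defined below would be

tk,l = σn(k)² / σk,l

which is offered here only as a plausible reconstruction, not necessarily the authors' exact formula.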

where σn(k) is the noise variance at scale k and σk,l is the signal variance of the coefficients Ck,l.

σn(k) and σk,l are defined as follows:
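
The defining equations are missing from this copy; standard estimators consistent with the surrounding text (a median-based noise estimate and a residual signal estimate, as in [20]) would be

σn(k) = median(|Ck,l(i, j)|) / 0.6745

σk,l = sqrt( max( (1/(p·q)) Σi,j Ck,l(i, j)² − σn(k)², 0 ) )

These are offered as plausible reconstructions rather than the authors' exact definitions.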

where p×q is the size of the coefficients Ck,l .

Fig. 5 shows a plot of the nonlinear mapping function. The parameters are b = 0.4, c = 10, k1 = 0.4, k2 = 0.8, and the threshold is tk,l = 0.3.

Fig. 5. The nonlinear mapping function
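
Since the analytic form of the mapping is not reproduced in this copy, the sketch below shows only one plausible sigmoid-based gain controlled by the parameters b, c, k1, k2 and the threshold tk,l; it is an illustrative assumption, not the authors' exact function.

    import numpy as np

    def sigm(u):
        """Sigmoid function: sigm(u) = 1 / (1 + exp(-u))."""
        return 1.0 / (1.0 + np.exp(-u))

    def enhance_band(C, t, b=0.4, c=10.0, k1=0.4, k2=0.8):
        """Apply an illustrative sigmoid-based nonlinear gain to one curvelet subband C.

        Coefficients whose magnitude is below the threshold t are left unchanged;
        larger coefficients receive a smooth gain that rises between k1 and k2
        (magnitudes are normalised to [0, 1] before the gain is computed).
        """
        m = np.max(np.abs(C)) + 1e-12                        # normalisation constant
        x = np.abs(C) / m                                    # magnitudes in [0, 1]
        gain = 1.0 + b * (sigm(c * (x - k1)) - sigm(c * (x - k2)))
        gain[np.abs(C) < t] = 1.0                            # keep sub-threshold coefficients
        return C * gain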

3.4. Retinal Image Fusion

By performing the inverse curvelet transform on the modified coefficients of the decomposition subbands, the enhanced retinal image GI can be reconstructed.

A concern for the proposed scheme is the effect of the Gibbs phenomenon. We use an image fusion method to reduce this effect [26]; here we adopt the minimum fusion rule.
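
The fusion equation is not shown in this copy; with a pixel-wise minimum rule it amounts to

G′(i, j) = min( G(i, j), GI(i, j) )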

where G is the original image, GI is the enhanced image, and G′ is the fused retinal image.

Fig. 6 shows the enhanced retinal image GI and the fused retinal image G′. The image in Fig. 6(a) has an annular shadow; in Fig. 6(b), the annular shadow has been removed.

Fig. 6. (a) The enhanced retinal image; (b) the fused retinal image

 

4. Multi-objective Optimization Model For Nonlinear Contrast Enhancement

In this section, an automatic method is presented for selecting optimal parameter settings of the nonlinear mapping function via real-coded quantum genetic search strategy. Firstly, we construct a multi-objective optimization model for nonlinear contrast enhancement based on retinal image quality measures. Then we use real-coded quantum genetic algorithm (RCQGA) to solve the optimization problem.

4.1 Retinal Image Quality Measures

(1)Entropy

Entropy is known to be a measure of the amount of uncertainty about the image. It is given by

where L is the number of gray levels and pi is the probability of occurrence of the ith gray level; note that the pi sum to 1.

A larger H value indicates a better enhancement result.

(2)Spatial Frequency (SF)

Spatial frequency (SF) is defined as:

where RF and CF are the row and column frequencies, respectively, defined as:

where G is an image of size m×n pixels. A larger SF value indicates a better enhancement result.

(3) Root Mean Square Error

Root mean square error (RMSE) is used to evaluate the effect of retinal image enhancement. RMSE is defined as:

where m×n is the image size, and G and G′ are the original image and the enhanced image, respectively. A larger RMSE value indicates a better enhancement result.

(4) Peak Signal to Noise Ratio (PSNR)

Peak signal to noise ratio (PSNR) is defined as:

A larger PSNR value indicates lower noise.

(5) Structural Content (SC)

Structural content (SC) is defined as:

A smaller SC value is preferred.
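
A minimal sketch of the five measures, assuming 8-bit grayscale images stored as NumPy arrays (the formulas follow the standard definitions summarised above):

    import numpy as np

    def entropy(G, levels=256):
        """Shannon entropy H = -sum(p_i * log2(p_i)) over the gray-level histogram."""
        hist, _ = np.histogram(G, bins=levels, range=(0, levels))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def spatial_frequency(G):
        """SF = sqrt(RF^2 + CF^2) from row and column first differences."""
        G = G.astype(np.float64)
        rf = np.sqrt(np.mean(np.diff(G, axis=1) ** 2))   # row frequency
        cf = np.sqrt(np.mean(np.diff(G, axis=0) ** 2))   # column frequency
        return np.sqrt(rf ** 2 + cf ** 2)

    def rmse(G, Ge):
        """Root mean square difference between original G and enhanced Ge."""
        d = G.astype(np.float64) - Ge.astype(np.float64)
        return np.sqrt(np.mean(d ** 2))

    def psnr(G, Ge, peak=255.0):
        """Peak signal-to-noise ratio in dB."""
        e = rmse(G, Ge)
        return float('inf') if e == 0 else 20.0 * np.log10(peak / e)

    def structural_content(G, Ge):
        """SC = sum(G^2) / sum(Ge^2)."""
        return np.sum(G.astype(np.float64) ** 2) / np.sum(Ge.astype(np.float64) ** 2)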

4.2 Multi-objective Optimization Model for Nonlinear Contrast Enhancement

A multi-objective optimization model for nonlinear contrast enhancement is as follows.

Objective:

Subject to:

0.01 < b ≤ 1, 1.0 < c ≤ 20

0.1 < k1 ≤ 0.4, 0.6 < k2 ≤ 1

4.3 Real-coded Quantum Genetic Algorithm (RCQGA)

Quantum genetic algorithm (QGA) is a probabilistic search algorithm that exploits the power of quantum computation in order to accelerate genetic procedures [32-34]. The smallest unit of information stored in a two-state quantum computer is called a quantum bit (Q-bit), which may be in the "1" state, in the "0" state, or in any superposition of the two [35]. The state of a quantum bit can be represented as:
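
The state equation is missing from this copy; the standard form is

|ψ⟩ = α|0⟩ + β|1⟩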

where |α|2 + |β|2 = 1. α and β are complex numbers that specify the probability amplitudes of the corresponding states.

The Q-bit representation has the advantage that it is able to represent a linear superposition of states [32-35]. A Q-bit individual, as a string of m Q-bits, is defined as:
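
The representation itself is not shown in this copy; in the usual QGA notation it is written as the 2×m array

q = [ α1  α2  …  αm ;  β1  β2  …  βm ],  with |αi|² + |βi|² = 1 for i = 1, …, m.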

For update, a quantum rotation gate R(Δθ) is usually adopted in compliance with practical optimization problems [32].
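
The gate matrix is not reproduced here; its standard form is

R(Δθ) = ( cos Δθ, −sin Δθ ; sin Δθ, cos Δθ )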

where Δθ is the rotation angle of each Q-bit.

Real-number encoding has been confirmed to have better performance than either binary or Gray encoding for multi-parameter optimization problems [35]. Thus we use the real-coded quantum genetic algorithm (RCQGA) in this paper.

RCQGA adopts a multi-bit representation instead of a single Q-bit to denote a real number [35,36]. A list of uniformly distributed random real numbers forms an initial chromosome.

where xi is a uniformly distributed real number with xmin ≤ xi ≤ xmax, and θi denotes the corresponding phase angle.

Thus every chromosome's information is represented simultaneously in phase space and in real-number space.

Probability crossover and chaotic mutation are adopted to make the best use of Q-bit coherence and chaos in the evolutionary process of RCQGA [32-36].

Assume that a given generation retains the best individual and its phase angle, denoted by B(s) and θ(s). The population of this generation is denoted by Li(s), and the corresponding phase angles by θi(s). Probability crossover is employed to generate the next generation [33-35].

The real-number chromosomes of the current generation are perturbed, with limited amplitude, by a chaotic sequence Y. Here the logistic map is used to generate the chaotic sequence [35].
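
The map equation is not reproduced in this copy; the standard logistic map used to generate such sequences is

y(n+1) = μ · y(n) · (1 − y(n)),  typically with μ = 4 and y(0) ∈ (0, 1).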

The limiting amplitude is adjusted according to the fitness value [36]. Supposing that we seek the global minimum of the objective function, the amplitude of the disturbance sequence can be denoted by:

In this way, mutation of the whole population is achieved via Eqs. (30), (31), and (32).

The fitness function is defined as follows:

Here γ = 0.2 and η = 0.1.

The detailed procedure of RCQGA is described as follows:

1.Procedure of the RCQGA

2.Begin

3. current generation s ←1

4. initialize T(s) represent the parameters b,c, k1 and k2

5. make L(s) by observing the states of T(s)

6. calculate fitness function F(x) and evaluate L(s)

7. store the best individual among L(s) and its fitness value

8. while (not the stopping condition)

9. begin

10. s ← s + 1

11. apply Eqs. (28) and (29) to perform selection and quantum crossover

12. apply Eqs. (30) , (31) and (32) to perform chaotic mutation

13. calculate fitness function F(x) and evaluate L(s)

14. store the best individual among L(s) and its fitness value

15. end

16.output the optimal parameters b,c, k1 , k2 and the optimal fitness value

17.End
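
A compact sketch of this loop is given below. The fitness function and the update rules of Eqs. (28)-(32) are not reproduced in this copy, so the crossover and mutation steps here are illustrative placeholders (a pull-toward-the-best crossover and a logistic-map perturbation) rather than the authors' exact operators.

    import numpy as np

    def rcqga(fitness, bounds, pop_size=20, max_gen=100, seed=0):
        """Illustrative real-coded quantum-inspired GA loop (placeholders for Eqs. (28)-(32)).

        fitness : callable mapping a parameter vector (b, c, k1, k2) to a scalar to maximise
        bounds  : array of shape (n_params, 2) holding (lower, upper) limits per parameter
        """
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))      # real-coded chromosomes
        best = max(pop, key=fitness).copy()                      # best individual so far
        for _ in range(max_gen):
            # probability crossover: pull individuals toward the current best (placeholder)
            alpha = rng.uniform(0.0, 1.0, size=pop.shape)
            pop = alpha * pop + (1.0 - alpha) * best
            # chaotic mutation driven by one logistic-map step (placeholder)
            y = rng.uniform(0.01, 0.99, size=pop.shape)
            y = 4.0 * y * (1.0 - y)
            amp = 0.1 * (hi - lo)                                # illustrative disturbance amplitude
            pop = np.clip(pop + amp * (2.0 * y - 1.0), lo, hi)
            cand = max(pop, key=fitness)
            if fitness(cand) > fitness(best):
                best = cand.copy()
        return best

For the parameter ranges of Section 4.2, one could call it with bounds = np.array([[0.01, 1.0], [1.0, 20.0], [0.1, 0.4], [0.6, 1.0]]) and a fitness function built from the measures of Section 4.1.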

 

5. Experimental results

The experiments are performed on two well-known retinal databases: the DRIVE database and the STARE database. In this paper, five objective evaluation measures, H, SF, RMSE, PSNR, and SC, which have been well validated, are used to quantitatively evaluate the retinal image enhancement performance.

5.1 Experimental Results from the DRIVE Database

The DRIVE (Digital Retinal Images for Vessel Extraction) database is a public retinal database [37]. The DRIVE database consists of 40 RGB color images of the retina. The images are of size 565×584 pixels, with 8 bits per color channel, in LZW-compressed TIFF format. These images were originally captured with a Canon CR5 non-mydriatic 3CCD (charge-coupled device) camera at a 45° field of view (FOV) and were initially saved in JPEG format.

The original retinal image (green channel) and the images enhanced with methods based on Local Normalization (LN), Adaptive Histogram Equalization (AHE), Laplacian, DWT, and our proposed method are shown in Fig. 7. Here we use the DWT with the same nonlinear mapping function for comparison with the curvelet transform. The wavelet transform uses the Daubechies biorthogonal 4-4 wavelet and a four-level decomposition.

The objective evaluations of the enhancement results of the proposed method and the four comparison approaches for the retinal images are listed in Table 1. We can see from Table 1 that the proposed method achieves almost all of the best objective scores (except for the SF, RMSE, and SC values of the HE method), and is clearly better than the other four methods.

Fig. 7. Results on an image from the DRIVE database: (a) original image, (b) local normalization (LN), (c) adaptive histogram equalization (AHE), (d) Laplacian, (e) wavelet, (f) curvelet. The parameters are b = 0.015, c = 19.96, k1 = 0.39, k2 = 0.97.

We can see that the results of our method exhibit the best visual quality. Fig. 7(b) shows the local normalization enhancement, which has poor visual quality. Fig. 7(c) shows the adaptive histogram equalization enhancement; unfortunately, it has lower brightness. Fig. 7(d) shows the Laplacian enhancement, which also has poor visual quality. Fig. 7(e) and (f) show the comparison between the wavelet and curvelet enhancements. The wavelet result is inferior to the curvelet result: some blood vessels in Fig. 7(f) are clearer than in Fig. 7(e).

Table 1.Quantitative assessments on the DRIVE database

Obviously, the curvelet-based method provides the best performance for retinal image enhancement.

5.2 Experimental Results from the STARE Database

The STARE database is a public retinal database [38]. The retinal images of the STARE database were captured using a TopCon TRV-50 fundus camera at 35° FOV, and afterwards digitized to 700×605 pixels, 8 bits per color channel.

The original retinal image (Green channel) and the enhanced images with different enhancement methods based on Local Normalization (LN), Adaptive Histogram Equalization (AHE), Laplacian, DWT and our proposed methods are shown in Fig. 8.

The objective evaluations on the enhanced results of the proposed method and other comparable four approaches for the retinal images are listed in Table 2.

Fig. 8. Results on an image from the STARE database: (a) original image, (b) local normalization (LN), (c) adaptive histogram equalization (AHE), (d) Laplacian, (e) wavelet, (f) curvelet. The parameters are b = 0.013, c = 19.44, k1 = 0.39, k2 = 0.97.

Table 2.Quantitative assessments on the STARE database

From Fig. 8 and Table 2, we can see that the performance of the proposed method is better than the Local Normalization (LN) [3], Adaptive Histogram Equalization (AHE) [13], Laplacian [14], and DWT [17-19] methods. The H, SF, PSNR, and SC values (4 indexes) of our method are better than those of the LN, AHE, and Laplacian methods. The H, SF, RMSE, and SC values (4 indexes) of our method are better than those of the DWT method. The results of our method exhibit the best visual quality: the blood vessels in Fig. 8(f) are clearer than those produced by the LN, AHE, Laplacian, and DWT methods.

5.3 More Experiments

In the experiments, we randomly choose 40 retinal images from the DRIVE and STARE databases for testing (20 images from DRIVE and 20 images from STARE). To evaluate the performance of the proposed method, we use the statistics of these results as the final experimental result. The results are shown in Table 3, where we report the number of images for which the proposed method obtains better values than the LN, AHE, Laplacian, and DWT methods.

Table 3.Statistics results

For all retinal images, our method is superior to the other four methods in 3 indexes. For 33 retinal images, the proposed method is superior to the other four methods in 4 indexes. These extensive experiments demonstrate that the proposed method provides superior enhancement performance.

 

6. Conclusions

This paper presents a novel retinal image enhancement approach based on the curvelet transform and a real-coded quantum genetic algorithm. The enhancement is carried out with the fast discrete curvelet transform (FDCT) via wrapping. An automatic method is proposed for selecting the optimal parameter settings of the nonlinear mapping function via a quantum genetic search strategy. The DRIVE database and the STARE database are used to test the performance of the proposed scheme. The experiments demonstrate that the proposed method provides superior enhanced images in terms of the pertinent quantitative image evaluation indexes. However, one weak point of the proposed scheme is that its computational load is somewhat heavier than that of wavelet-based methods.

In conclusion, the proposed approach is a novel attempt to apply curvelet transform for retinal image enhancement, and can also be applied to other problems such as medical image segmentation, medical image fusion, etc.

References

  1. Niemeijer M, Ginneken B and Staal J J, "Automatic detection of red lesions in digital color fundus photographs," IEEE Trans. Med. Imaging, vol. 24, no. 5, pp. 584-592, 2005. https://doi.org/10.1109/TMI.2005.843738
  2. Intajag S, Tipsuwanporn V and Chatthai R, "Retinal Image Enhancement in Multi-Mode Histogram," in Proc. of World Congress on Computer Science and Information Engineering, pp. 745-749, March 31-April 2, 2009.
  3. Staal J J, Abramoff M D, and Niemeijer M et al, "Ridge based vessel segmentation in color images of the retina," IEEE Trans. Med. Imaging, vol. 23, no. 4 pp. 501-509, 2004. https://doi.org/10.1109/TMI.2004.825627
  4. Niemeijer M, Staal J J and Ginneken B et al, "Comparative study of retinal vessel segmentation methods on a new publicly available database," in Proc. of SPIE Medical Imaging, pp. 648-656, February 15-20, 2004.
  5. Jelinek H F, Cree M J and Leandro J J et al, "Automated segmentation of retinal blood vessels and identification of proliferative diabetic retinopathy," JOSA A, vol. 24, no. 5, pp. 1448-1456, 2007. https://doi.org/10.1364/JOSAA.24.001448
  6. Salinas H M, Fernandez D C, "Comparison of PDE-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography," IEEE Trans. Med. Imaging, vol. 26, no. 6, pp. 761-771, 2007. https://doi.org/10.1109/TMI.2006.887375
  7. Karras D A, Mertzios G B, "New PDE-based methods for image enhancement using SOM and Bayesian inference in various discretization schemes," Measurement Science and Technology, vol. 20, no. 10, pp. 104012, 2009. https://doi.org/10.1088/0957-0233/20/10/104012
  8. Feng P, Pan Y J and Wei B,et.al, "Enhancing retinal image by the Contourlet transform," Pattern Recognition Letters, vol. 28, no. 4, pp. 516-522, 2007. https://doi.org/10.1016/j.patrec.2006.09.007
  9. Lin T S, Du M H, and Xu J T, "The Preprocessing of subtraction and the enhancement for biomedical image of retinal blood vessels," J. Biomed. Eng, vol. 20, no. 1, pp. 56-59, 2003.
  10. Martinez P M, Hughes A D and Thom S A, "Segmentation of blood vessels from red-free and fluorescein retinal images," Med. Image. Analysis, vol. 11, no. 1, pp. 47-61, 2007. https://doi.org/10.1016/j.media.2006.11.004
  11. Perfetti R, Ricci E and Casali D, "Cellular neural networks with virtual template expansion for retinal vessel segmentation," IEEE Trans. Circuits. Systems, vol. 54, no. 2, pp. 141-145, 2007.
  12. Wang L, Bhalerao A and Wilson R, "Analysis of retinal vasculature using a multiresolution Hermite model," IEEE Trans. Med. Imaging, vol. 26, no. 2, pp. 137-152, 2007. https://doi.org/10.1109/TMI.2006.889732
  13. Sofka M, Stewart C V, "Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures," IEEE Trans. Med. Imaging, vol. 25, no. 12, pp. 1531-1546, 2006. https://doi.org/10.1109/TMI.2006.884190
  14. WANG Zhiming, TAO Jianhua, "A Fast Implementation of Adaptive Histogram Equalization, " in Proc. of ICSP, pp.16-20, 2006.
  15. Sylvain Paris, Samuel W. Hasinoff and Jan Kautz, "Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid," ACM Transactions on Graphics, vol 30, no.4, pp. 1-11, 2011.
  16. Abdul-Karim M A, Roysam B and Dowell-Mesfin N M, "Automatic selection of parameters for vessel/neurite segmentation algorithms," IEEE Trans. Image. Process, vol. 14, no. 9, pp. 1338-1350, 2005. https://doi.org/10.1109/TIP.2005.852462
  17. Fu J C, Lien H C, and Wong S T C, "Wavelet-based histogram equalization enhancement of gastric sonogram images," Computer. Med. Imaging. Graph, vol. 24, no. 2, pp. 59-68, 2000. https://doi.org/10.1016/S0895-6111(00)00007-0
  18. Fu J C, Chai J.W and Wong S.T.C, "Wavelet-based enhancement for detection of left ventricular myocardial boundaries in magnetic resonance images," Magn. Reson. Imaging, vol. 18, no. 9, pp. 1135-1141, 2000. https://doi.org/10.1016/S0730-725X(00)00202-2
  19. Laine A, Schuler S and Fan J, "Mammographic feature enhancement by multiscale analysis," IEEE Trans. Med. Imaging, vol. 13, no. 4, pp. 725-740, 1994. https://doi.org/10.1109/42.363095
  20. Chang S G, Yu B, and Vetterli M, "Spatially adaptive wavelet thresholding with context modeling for image denoising," IEEE Trans. Image. Process, vol. 9, no. 9, pp. 1522-1531, 2000. https://doi.org/10.1109/83.862630
  21. Mandal T, Majumdar A and Wu Q.M, "Face Recognition by Curvelet Based Feature Extraction," in Proc. of International Conference on Intelligent Automation and Robotics, pp. 806-817, October 24-26, 2007.
  22. Elad M, Aharon M, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image. Process, vol. 15, no. 12, pp. 3736-3745, 2006. https://doi.org/10.1109/TIP.2006.881969
  23. Huang K, Aviyente S, "Sparse representation for signal classification," in Proc. of Neural Information Processing Systems, pp. 609-616, December 3-6, 2007.
  24. Z. Li, Y. Yang, J. Liu, X. Zhou, and H. Lu, "Unsupervised Feature Selection Using Nonnegative Spectral Analysis," in Proc. Twenty-Sixth AAAI, pp. 1026-1032, July 22-26, 2012.
  25. Z. Li, J. Liu, Y. Yang, X. Zhou, and H. Lu, "Clustering-Guided Sparse Structural Learning for Unsupervised Feature Selection", IEEE Transactions on Knowledge and Data Engineering, vol. PP, no. 99, pp. 1, 2013.
  26. Li S T, Yang B, "Multifocus image fusion by combining curvelet and wavelet transform," Pattern Recognition Letters, vol. 29, no. 9, pp. 1295-1301, 2008. https://doi.org/10.1016/j.patrec.2008.02.002
  27. Candes E J, Demanet L and Donoho D L, "Fast Discrete Curvelet Transforms," Multiscale Modeling and Simulation, vol. 5, no. 3, pp. 861-899, 2006. https://doi.org/10.1137/05064182X
  28. Mandal T, "Curvelet based face recognition via dimension reduction," Signal Processing, vol. 89, no. 12, pp. 2345-235, 2009. https://doi.org/10.1016/j.sigpro.2009.03.007
  29. Fu M Y, Zhao C, "Fusion of infrared and visible images based on the Second generation curvelet transform," Journal of Infrared and Millimeter Waves, vol. 28, no. 4, pp. 254-258, 2009. https://doi.org/10.3724/SP.J.1010.2009.00254
  30. Xie Z H, Liu G D and Wu S Q, "A fast infrared face recognition system using curvelet transformation," in Proc. of Inter Symp. Electro. Comm. Secu., pp. 145-49, May 22-24, 2009.
  31. Narasimha H, Can A and Roysam B, "Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy," IEEE Trans. Biomedical Engineering, vol. 53, no. 6, pp. 1084-1098, 2006. https://doi.org/10.1109/TBME.2005.863971
  32. Xing H L, Liu X, and Jin X, "A multi-granularity evolution based Quantum Genetic Algorithm for QoS multicast routing problem in WDM networks," Computer. Comm., vol. 32, no. 2, pp. 386-393, 2009. https://doi.org/10.1016/j.comcom.2008.11.009
  33. Malossini A, Blanzieri E, and Calarco T, "Quantum Genetic Optimization," IEEE Trans. Evol. Comp., vol. 12, no. 2, pp. 231-241, 2008. https://doi.org/10.1109/TEVC.2007.905006
  34. Li B, Wang L, "A hybrid quantum-inspired genetic algorithm for multi-objective flow shop scheduling," IEEE Trans Syst. Man. Cyber-Part B, vol. 37, no. 3, pp. 576-591, 2007. https://doi.org/10.1109/TSMCB.2006.887946
  35. Zhao S F, Xu G H and Tao F F, "Real-coded chaotic quantum-inspired genetic algorithm for training of fuzzy neural networks," Computers and Mathematics with Applications, vol. 57, no. 11-12, pp. 2009-2015, 2009. https://doi.org/10.1016/j.camwa.2008.10.048
  36. Z. Sheng and J. Wanlu, "A novel quantum genetic algorithm and its application," in Proc. of Eighth International Conference on Natural Computation (ICNC), pp. 613-617, May 29-31, 2012.
  37. DRIVE database.
  38. STARE database.
