
Global Journal of Advanced Engineering Technologies, Vol1, Issue-2, 2012

ISSN: 2277-6370

AN IMAGE FUSION USING WAVELET AND CURVELET TRANSFORMS


Smt. G. Mamatha (Ph.D.)1, L. Gayatri2

1 Assistant Professor, Department of ECE, JNTUA College of Engineering, Anantapur, Andhra Pradesh, India
2 Department of ECE, JNTUA College of Engineering, Anantapur, Andhra Pradesh, India

Abstract: This paper presents a wavelet and curvelet transform based approach for the fusion of digital images and of magnetic resonance (MR) and computed tomography (CT) images. We examine the selection principles for the low- and high-frequency coefficients in the different frequency domains obtained after the wavelet and the curvelet transforms. In choosing the low-frequency and high-frequency coefficients, the concepts of local area variance and the window property of pixels are used, respectively. Some attempts have been proposed for the fusion of MR and CT images using the wavelet transform. The objective of fusing an MR image and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. Since medical images contain several objects and curved shapes, it is expected that the curvelet transform would perform better in their fusion. The simulation results show the superiority of the curvelet transform over the wavelet transform in the fusion of digital images and of MR and CT images from the entropy, correlation coefficient and RMS error points of view.

Keywords: Image Processing, Image Fusion, Wavelet Transform, Curvelet Transform

I. INTRODUCTION

Image fusion is the process of merging two images of the same scene to form a single image with as much information as possible. Image fusion is important in many image processing fields such as satellite imaging, remote sensing and medical imaging. The study of image fusion first evolved to serve advances in satellite imaging and was then extended to the field of medical imaging. Several fusion algorithms have been proposed, ranging from simple averaging to the curvelet transform. Algorithms such as the intensity, hue and saturation (IHS) algorithm and the wavelet fusion algorithm have proved successful in satellite image fusion. The IHS algorithm belongs to the family of color image fusion algorithms. The wavelet fusion algorithm has also succeeded in both satellite and medical image fusion applications [1,2]. The basic limitation of the wavelet fusion algorithm is in the fusion of curved shapes. Thus, there is a need for another algorithm that can handle curved shapes efficiently, and the application of the curvelet transform to curved-object image fusion would result in better fusion efficiency. A few attempts at curvelet fusion have been made for satellite images, but no attempts have been made for medical images [3,4]. The main objective of medical imaging is to obtain a high-resolution image with as much detail as possible for the sake of diagnosis. There are several medical imaging techniques such as the MR and the CT techniques [5]. Both techniques give special, sophisticated characteristics of the organ to be imaged. So, it is expected that the fusion of the MR and the CT images of the same organ would result in an integrated image with much more detail. Researchers have made a few attempts at the fusion of MR and CT images. Most of these attempts are directed towards the application of the wavelet transform for this purpose. Due to the limited ability of the wavelet transform to deal with images having curved shapes, the application of the curvelet transform for MR and CT image fusion is presented in this paper [6,9]. The curvelet transform is based on the segmentation of the whole image into small overlapping tiles, after which the ridgelet transform is applied to each tile. The purpose of the segmentation process is to approximate curved lines by small straight lines, and the overlapping of tiles aims at avoiding edge effects. The ridgelet transform itself is a 1-D wavelet transform applied to the Radon transform of each tile, which is a tool of shape detection. The curvelet transform was first proposed for image denoising [10]. Some researchers have tried to apply it in satellite image fusion. Because of its ability to deal with curved shapes, the application of the curvelet transform in medical image fusion would give better fusion results than those obtained using the wavelet transform.

II. EXISTING METHODS

Wavelet Fusion: The most common form of transform-type image fusion algorithm is the wavelet fusion algorithm, due to its simplicity and its ability to preserve the time and frequency details of the images to be fused.


Fig. 1: Block diagram of the discrete wavelet transform

A schematic diagram of the wavelet fusion algorithm of two registered images P1(x1, x2) and P2(x1, x2) is depicted in Fig. 1. It can be represented by the following equation:

I(x_1, x_2) = W^{-1}\big(\phi\big(W(P_1(x_1, x_2)), W(P_2(x_1, x_2))\big)\big)    (1)

where W, W^{-1} and \phi are the wavelet transform operator, the inverse wavelet transform operator and the fusion rule, respectively. There are several wavelet fusion rules that can be used for the selection of wavelet coefficients from the wavelet transforms of the images to be fused. The most frequently used rule is the maximum frequency rule, which selects the coefficients that have the maximum absolute values. The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. For curved edges, however, the accuracy of edge localization in the wavelet transform is low. So, there is a need for an alternative approach with a high accuracy of curve localization, such as the curvelet transform.
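As a concrete illustration of Eq. (1), the following sketch implements wavelet fusion with the maximum-absolute-value rule using the PyWavelets package. The wavelet ('db2'), the decomposition level and the averaging of the approximation band are illustrative assumptions, not choices taken from this paper.

```python
# Minimal sketch of the wavelet fusion rule of Eq. (1) using PyWavelets.
# Detail coefficients are selected by the maximum-absolute-value rule named
# in the text; the wavelet, level and approximation handling are assumptions.
import numpy as np
import pywt

def wavelet_fuse(p1, p2, wavelet="db2", level=3):
    """Fuse two registered grayscale images of equal size with a DWT."""
    c1 = pywt.wavedec2(p1, wavelet, level=level)
    c2 = pywt.wavedec2(p2, wavelet, level=level)

    # Approximation band: simple average (an assumed, commonly used choice).
    fused = [(c1[0] + c2[0]) / 2.0]

    # Detail bands: keep the coefficient with the larger absolute value.
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))

    return pywt.waverec2(fused, wavelet)
```

The two inputs are assumed to be registered floating-point arrays of the same shape; for odd-sized inputs the PyWavelets reconstruction can come back one pixel larger and may need cropping.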

III. PROPOSED METHOD

1. Curvelet Transform: The curvelet transform has evolved as a tool for the representation of curved shapes in graphical applications. It was then extended to the fields of edge detection and image denoising [10]. Recently, some authors have proposed the application of the curvelet transform in image fusion. The algorithm of the curvelet transform of an image P can be summarized in the following steps:
A) The image P is split up into three subbands P1, P2 and P3 using the additive wavelet transform.
B) Tiling is performed on the subbands P1 and P2.
C) The discrete ridgelet transform is performed on each tile of the subbands P1 and P2.
A schematic diagram of the curvelet transform is depicted in Fig. 2.

A. Subband Filtering: The purpose of this step is to decompose the image into additive components, each of which is a subband of that image. This step isolates the different frequency components of the image into different planes without the down-sampling used in the traditional wavelet transform. Given an image P, it is possible to construct the sequence of approximations f1(P) = P1, f2(P) = P2, f3(P) = P3, ..., fn(P) = Pn, where n is an integer which is preferred to be equal to 3. To construct this sequence, successive convolutions with a certain low-pass kernel are performed; the functions f1, f2, f3, ..., fn denote convolutions with this kernel. The wavelet planes are computed as the differences between two consecutive approximations P_{l-1} and P_l, i.e., \omega_l = P_{l-1} - P_l. Thus, the curvelet reconstruction formula is given by:

P = \sum_{l=1}^{n-1} \omega_l + P_n    (2)
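A minimal sketch of this additive decomposition is given below, assuming a Gaussian low-pass kernel and n = 3; the indexing is arranged so that summing all returned wavelet planes plus the coarsest approximation reconstructs the image exactly, in the spirit of Eq. (2).

```python
# Sketch of the additive decomposition described above: successive low-pass
# approximations (no down-sampling) and wavelet planes w_l = P_{l-1} - P_l.
# The Gaussian kernel and sigma are illustrative assumptions; any smoothing
# kernel could play the role of the low-pass filter.
import numpy as np
from scipy.ndimage import gaussian_filter

def additive_decomposition(image, n=3, sigma=2.0):
    planes = []
    previous = image.astype(float)          # P_0 is the input image
    for _ in range(n):
        current = gaussian_filter(previous, sigma=sigma)  # low-pass convolution
        planes.append(previous - current)                 # wavelet plane w_l
        previous = current
    return planes, previous                 # (w_1..w_n, coarsest approximation P_n)

# Reconstruction check: summing all planes plus P_n recovers the image
# up to floating-point error.
# planes, p_n = additive_decomposition(img, n=3)
# recon = sum(planes) + p_n
```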

B. Tiling: Tiling is the process by which the image is divided into overlapping tiles. These tiles are small in dimensions, so that curved lines are approximated by small straight lines in the subbands P1 and P2. The tiling improves the ability of the curvelet transform to handle curved edges.

C. Ridgelet Transform: The ridgelet transform belongs to the family of discrete transforms employing basis functions. To facilitate its mathematical representation, it can be viewed as a wavelet analysis in the Radon domain. The Radon transform itself is a tool of shape detection. So, the ridgelet transform is primarily a tool of ridge detection or shape detection of the objects in an image. The ridgelet basis function is given by [12]:

\psi_{a,b,\theta}(x_1, x_2) = a^{-1/2}\, \psi\big((x_1\cos\theta + x_2\sin\theta - b)/a\big)

for each a > 0, each b \in R and each \theta \in [0, 2\pi). This function is constant along the lines x_1\cos\theta + x_2\sin\theta = const. Thus, the ridgelet coefficients of an image f(x1, x2) are represented by:

R_f(a, b, \theta) = \int\!\!\int \psi_{a,b,\theta}(x_1, x_2)\, f(x_1, x_2)\, dx_1\, dx_2    (3)

This transform is invertible and the reconstruction formula is given by:

f(x_1, x_2) = \int_0^{2\pi}\!\int_{-\infty}^{\infty}\!\int_0^{\infty} R_f(a, b, \theta)\, \psi_{a,b,\theta}(x_1, x_2)\, \frac{da}{a^3}\, db\, \frac{d\theta}{4\pi}    (4)


The Radon transform of an object f is the collection of line integrals indexed by (\theta, t) \in [0, 2\pi) \times R and is given by:

R_f(\theta, t) = \int\!\!\int f(x_1, x_2)\, \delta(x_1\cos\theta + x_2\sin\theta - t)\, dx_1\, dx_2    (5)
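Numerically, Eq. (5) can be approximated with the radon function of scikit-image, which returns the set of line-integral projections of an image over a chosen set of angles; the test object and angle sampling below are illustrative assumptions.

```python
# Numerical illustration of Eq. (5): line-integral projections of a simple
# test object.  scikit-image samples theta in degrees over [0, 180), which
# suffices because projections repeat by symmetry.
import numpy as np
from skimage.transform import radon

image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                            # a simple square object f
theta = np.linspace(0.0, 180.0, 90, endpoint=False)  # projection angles (degrees)
sinogram = radon(image, theta=theta, circle=False)   # Rf(theta, t), t along the rows
```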

Fig. 2: Discrete curvelet transform of an image P


Thus, the ridgelet transform can be represented in terms of the Radon transform as follows:

R_f(a, b, \theta) = \int R_f(\theta, t)\, a^{-1/2}\, \psi\big((t - b)/a\big)\, dt    (6)

Hence, the ridgelet transform is the application of the 1-D wavelet transform to the slices of the Radon transform, where the angular variable \theta is held constant and t varies.
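Following relation (6), a ridgelet-style analysis of a tile can be sketched as the Radon transform of the tile followed by a 1-D wavelet transform along the radial variable t for each fixed angle. The angle count, wavelet ('db4') and level below are assumptions, not parameters taken from the paper.

```python
# Sketch of relation (6): ridgelet coefficients of a square tile obtained by
# taking the tile's Radon transform and then a 1-D DWT along t (axis 0) for
# each projection angle.  Sampling choices are illustrative assumptions.
import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_coefficients(tile, n_angles=64, wavelet="db4", level=2):
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(tile, theta=theta, circle=False)   # rows: t, columns: theta
    # 1-D wavelet transform applied down each column (fixed theta, varying t).
    return pywt.wavedec(sinogram, wavelet, level=level, axis=0)
```

In the curvelet pipeline described earlier, this step would be applied to every overlapping tile of the subbands P1 and P2.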

To make the ridgelet transform discrete, both the Radon transform and the wavelet transform have to be discretized.

It is known that different imaging modalities are employed to depict different anatomical morphologies. CT images are mainly employed to visualize dense structures such as bones, so they give the general shapes of objects and few details. On the other hand, MR images are used to depict the morphology of soft tissues, so they are rich in details. Since these two modalities are of a complementary nature, our objective is to merge both images to obtain as much information as possible. A curvelet-based algorithm is introduced for this purpose. This algorithm is summarized as follows:
(1) The MR and the CT images are registered.
(2) The curvelet transform steps are performed on both images.
(3) The maximum frequency fusion rule is used for the fusion of the ridgelet transforms of the subbands P1 and P2 of both images.
(4) An inverse curvelet transform step is performed on P3 of the MR image and the fused subbands P1 and P2.
These steps are expected to merge the details of both images into a single image with much more detail.

IV. IMAGE FUSION ALGORITHM BASED ON WAVELET AND CURVELET TRANSFORMS

Images can be fused at three levels, namely pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion is adopted in this paper: the operation is applied to the pixels directly and the fused image is then obtained [2], so that as much information as possible is kept from the source images. Because the wavelet transform uses block-shaped bases to approximate the singularities of C2 curves, only isotropy is expressed and the geometry of the singularity is ignored [13,14]. The curvelet transform uses wedge-shaped bases to approximate the singularities of C2 curves; it has angular directivity compared with the wavelet transform, so anisotropy is expressed. When the direction of the approximating basis matches the geometry of the singularity, the curvelet coefficients become larger. First, pre-processing is needed, and regions of the same scale are cut from the images awaiting fusion according to the selected region. Subsequently, the images are divided by the wavelet transform into sub-images at different scales. Afterwards, a local curvelet transform of every sub-image is taken; its sub-blocks differ from each other on account of the scale change. The steps of using the curvelet transform to fuse two images are as follows:
(a) Resampling and registration of the original images, so that distortion is corrected and both images have similar probability distributions; the wavelet coefficients of similar components will then stay in the same magnitude.
(b) The wavelet transform is used to decompose the original images into the proper number of levels. One low-frequency approximate component and three high-frequency detail components are acquired at each level.
(c) The curvelet transform is applied to each acquired low-frequency approximate component and high-frequency detail component of both images; the neighborhood interpolation method is used so that the gray-level details are not changed.
(d) The images are fused according to a definite standard: the local area variance is chosen to measure the definition of the low-frequency component. First, the low-frequency coefficients C_{j0}(k1, k2) are divided into individual square sub-blocks of size N1 x M1 (3 x 3 or 5 x 5); then the standard deviation (STD) of the current sub-block is calculated:
STD(k_1, k_2) = \sqrt{ \frac{1}{N_1 M_1} \sum_{i=-(N_1-1)/2}^{(N_1-1)/2} \; \sum_{j=-(M_1-1)/2}^{(M_1-1)/2} \big[ C_{j_0}(k_1+i, k_2+j) - \bar{C}_{j_0}(k_1, k_2) \big]^2 }    (7)

Here, \bar{C}_{j_0}(k_1, k_2) stands for the mean of the low-frequency coefficients of the original images, and N1 x M1 is the size of the sub-block. Image fusion can then be performed based on the STD: if the STD of a sub-block is higher, that sub-block is preferred as the fused sub-block. The fused low-frequency coefficients C_F are therefore selected as:

C_F(k_1, k_2) = C_{P1}(k_1, k_2)   if STD_{P1} >= STD_{P2}
C_F(k_1, k_2) = C_{P2}(k_1, k_2)   if STD_{P1} < STD_{P2}

where C_{P1}(k_1, k_2) and C_{P2}(k_1, k_2) are the coefficient values of image P1 and image P2, respectively, and STD_{P1} and STD_{P2} are the standard deviations of the corresponding sub-blocks of images P1 and P2. The root mean square error (Erms) of the fused image with respect to an original image is given by:

E_{rms} = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big( R(i, j) - F(i, j) \big)^2 }    (8)

where R(i, j) is the original image, F(i, j) is the fusion result, and M and N are the dimensions of the images to be fused. The smaller the value of the RMSE, the better the performance of the fusion algorithm.
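A sketch of the local-variance selection rule of Eq. (7) and the piecewise rule above is given below. It uses a sliding window (of size 3 x 3 or 5 x 5) instead of disjoint sub-blocks and computes the local standard deviation with uniform filters; both choices are implementation assumptions, not details taken from the paper.

```python
# Sketch of the local-STD selection rule: for each low-frequency coefficient,
# keep the coefficient from the source whose local neighborhood has the larger
# standard deviation.  Uses a sliding window rather than disjoint sub-blocks.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(coeffs, size=3):
    """Local standard deviation over a size x size window (E[x^2] - E[x]^2)."""
    mean = uniform_filter(coeffs, size=size)
    mean_sq = uniform_filter(coeffs * coeffs, size=size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def select_by_std(c_p1, c_p2, size=3):
    """Per-coefficient selection: C_F = C_P1 where STD_P1 >= STD_P2, else C_P2."""
    std1, std2 = local_std(c_p1, size), local_std(c_p2, size)
    return np.where(std1 >= std2, c_p1, c_p2)
```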


The entropy (E) of the fused image is defined as:

E = -\sum_{i=0}^{L-1} p_i \log_2 p_i    (9)

where p_i is the probability of gray level i in the fused image and L is the number of gray levels. The entropy quantifies the amount of information contained in the fused image; a larger value indicates a better fusion result.
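The three evaluation measures can be sketched as follows. The 256-level histogram used for the entropy and the use of the Pearson correlation coefficient between a reference image and the fused image are assumptions about details the paper does not spell out.

```python
# Sketch of the evaluation measures used in the experiments: entropy of the
# fused image (Eq. (9)), correlation coefficient against a reference image,
# and RMS error (Eq. (8)).  The 8-bit (256-level) histogram is an assumption.
import numpy as np

def entropy(image, levels=256):
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))

def correlation_coefficient(reference, fused):
    r = reference.ravel().astype(float)
    f = fused.ravel().astype(float)
    return np.corrcoef(r, f)[0, 1]

def rmse(reference, fused):
    diff = reference.astype(float) - fused.astype(float)
    return np.sqrt(np.mean(diff * diff))
```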

V. EXPERIMENTAL RESULTS

The proposed algorithm is tested on multi-focus image fusion (digital images) and complementary image fusion (MR and CT images), and compared with the traditional wavelet fusion transform and the curvelet transform. Three experiments are conducted for this purpose. For the evaluation of the performance of the fusion algorithms, both the visual quality of the obtained fusion results and a quantitative analysis are used.

A. Multi-Focus Image Fusion

We use multi-focus lab images after standard focus testing. Fig. 3(a) shows the image with the left part in focus and the right part blurred, and Fig. 3(b) shows the image with the right part in focus and the left part blurred. Two fusion algorithms are adopted in this paper to contrast the fusion effects: the discrete wavelet transform (DWT) and the discrete curvelet transform (DCT) proposed in this paper. Choosing the fusion operator based on the biggest local area variance is used as the fusion standard for the high-frequency sub-bands of the other scales. Figs. 3(c) and (d) show the corresponding fusion results. Fig. 3 shows that both algorithms acquire good fusion results: the focus difference has been eliminated and the definition of the original images has been improved. The result of the DWT looks worse by contrast, with evident blurring at the edges. The best subjective effect is acquired with the DCT: the fused image is the clearest, and the most detail information is kept. We adopt the entropy of the fused image, the correlation coefficient (CC) and the RMS error (Erms) to evaluate the fusion quality, as expressed in Table 1. Within the same group of experiments, a fusion method is better if the entropy of the fused image is bigger, the correlation coefficient is closer to one, or Erms is smaller.

Table 1: Multi-focus image fusion results

Fusion method    Entropy    CC         Erms
DWT              6.7815     0.99042    0.48285
DCT              7.1209     0.9979     0.42782

Fig. 3: Multi-focus image fusion. (a) left-focus image; (b) right-focus image; (c) fused image of DWT; (d) fused image of DCT.

B. Complementary Image Fusion

In medicine, CT and MRI images are both tomography scanning images, but they have different features. Fig. 4(a) shows a CT image, in which the image brightness is related to tissue density: the brightness of bones is higher, and some soft tissues cannot be seen in CT images. Fig. 4(b) shows an MRI image, in which the image brightness is related to the amount of hydrogen atoms in the tissue: the brightness of soft tissue is higher, and bones cannot be seen. There is complementary information in these images. We apply the fusion methods described above to the medical images and adopt the same fusion standards. Figs. 4(c) and (d) show the results, and the data of the results are given in Table II.

Table II: Complementary image fusion results

Fusion method    Entropy    CC         Erms
DWT              6.3553     0.45921    0.28977
DCT              6.8614     0.59007    0.18119


Simulation experiments are carried out with the above fusion methods for comparison; the results are shown in Fig. 4. Both algorithms acquire good fusion results, but the result of the method proposed in this paper contains more detail information. The data in Table II also lead to the same conclusion.


Fig. 4: Complementary image fusion. (a) CT image; (b) MRI image; (c) fused image of DWT; (d) fused image of DCT.

VI. CONCLUSION

This paper has presented a new trend in the fusion of digital images and of MRI and CT images, based on the curvelet transform. A comparison study has been made between the traditional wavelet fusion algorithm and the proposed curvelet fusion algorithm. The experimental study shows that the application of the curvelet transform in the fusion of MR and CT images is superior to the application of the traditional wavelet transform. The obtained curvelet fusion results have higher correlation coefficient and entropy values, and lower RMS error values, than the wavelet fusion results. Finally, these fusion methods were applied in simulation experiments on multi-focus and complementary fusion images. Visually, the fusion algorithm proposed in this paper acquires better fusion results; in terms of the objective evaluation criteria, the curvelet fusion characteristics are superior to those of the traditional DWT.

REFERENCES

1. Chao R., Zhang K., Li Y. J., "An image fusion algorithm using wavelet transform," Chinese Journal of Electronics, vol. 32, no. 5, pp. 750-753, 2004.
2. Shen Y., Ma J. C., Ma L. Y., "An adaptive pixel-weighted image fusion algorithm based on local priority for CT and MRI images," Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC), pp. 420-422, 2006.

3. Choi M., Kim R. Y., Kim M. G., "The curvelet transform for image fusion," International Society for Photogrammetry and Remote Sensing, ISPRS 2004, vol. 35, part B8, pp. 59-64, Istanbul, 2004.
4. Choi M., Kim R. Y., Nam M. R., Kim H. O., "Fusion of multispectral and panchromatic satellite images using the curvelet transform," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 136-140, Apr. 2005.
5. Shen Y., Ma J. C., Ma L. Y., "An adaptive pixel-weighted image fusion algorithm based on local priority for CT and MRI images," Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC), pp. 420-422, 2006.
6. Zhan G. Q., Guo B. L., "Fusion of multisensor images based on the curvelet transform," Journal of Optoelectronics Laser, vol. 17, no. 9, pp. 1123-1127, 2006.
7. Long G., Xiao L., Chen X. Q., "Overview of the applications of curvelet transform in image processing," Journal of Computer Research and Development, vol. 42, no. 8, pp. 1331-1337, 2005.
8. Starck J. L., Murtagh F., Candès E. J., Donoho D. L., "Gray and color image contrast enhancement by the curvelet transform," IEEE Transactions on Image Processing, vol. 12, no. 6, pp. 706-717, 2003.
9. Choi M., Kim R. Y., Nam M. R., et al., "Fusion of multispectral and panchromatic satellite images using the curvelet transform," IEEE Geoscience and Remote Sensing Letters, vol. 2, no. 2, pp. 136-140, 2005.
10. Saevarsson B. B., Sveinsson J. R., Benediktsson J. A., "Combined wavelet and curvelet denoising of SAR images," Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), vol. 6, pp. 4235-4238, 2004.
11. Huang Xishan, Chen Zhe, "A wavelet-based scene image fusion algorithm," Proceedings of IEEE TENCON 2002, Piscataway, USA: IEEE Press, 2002, pp. 602-605.
12. Candès E., Donoho D., "Ridgelets: the key to high-dimensional intermittency?" Phil. Trans. R. Soc. Lond. A, vol. 357, pp. 2495-2509, 1999.
13. Candès E. J., Donoho D. L., "New tight frames of curvelets and optimal representations of objects with piecewise-C2 singularities," Comm. on Pure and Appl. Math., vol. 57, pp. 219-266, 2004.
14. Smith H. A., "A parametrix construction for wave equations with C^{1,1} coefficients," Ann. Inst. Fourier (Grenoble), vol. 48, pp. 797-835, 1998.
