
M.E.

(E & Tc)

COMPARISON OF DIFFERENT IMAGE FUSION METHODS BASED ON WAVELET TRANSFORM

CHAPTER-1
INTRODUCTION

SAOE-Electronics & Telecommunication 2010-11


1. INTRODUCTION
With the application of multisensor images in many fields, such as remote sensing, medical imaging, microscope imaging, machine vision, robotics, avionics, surveillance and military affairs, image fusion is becoming an active research area. Image fusion is the process of combining multiple images of the same scene into a single fused image, with the aim of preserving the full content and retaining the important features of each of the original images. The original images are often obtained from different capture techniques or instrument modalities viewing the same scene or objects. Such data provide wider, more reliable and complementary spectral information, so the fused image is more suitable for the purposes of object detection, target recognition and human visual perception. In general, depending on the extent to which information is extracted, image fusion can take place at the pixel, feature, or decision level.
With rapid advancements in technology, it is now possible to obtain information from multisource images. However, all the physical and geometrical information required for detailed assessment might not be available by analyzing the images separately. In multisensor images, there is often a trade-off between spatial and spectral resolution, resulting in information loss. Image fusion combines perfectly registered images from multiple sources to produce a high quality fused image with both spatial and spectral information. It integrates complementary information from various modalities based on specific rules to give a better visual picture of a scenario, suitable for further processing. The most common approach to image fusion, known as pixel-based fusion, consists of comparing information among pixels at the same location, or pixels in the same region, in different images. So far, pixel-based fusion has attracted much attention and many interrelated methods have been proposed, such as weighted means and multiresolution analysis.
Feature based fusion can be achieved through a region-based fusion framework, in which more intelligent rules are applied depending on the features of different regions. The wavelet transform is a signal analysis method similar to image pyramids; its discrete form, the discrete wavelet transform (DWT), is the analysis technique most commonly used in fusion schemes. An image can be represented either in its original spatial representation or in the frequency domain. By Heisenberg's uncertainty principle, information cannot be compact in both the spatial and frequency domains simultaneously. This motivates the use of the wavelet transform, which provides a multiresolution solution based on time-scale analysis. Each subband is processed at a different resolution, capturing localized time-frequency data of the image to provide unique directional information useful for image representation and feature extraction across different scales. Several approaches have been proposed for wavelet based image fusion, either pixel based or region based. In order to represent salient features more clearly and enrich the information content in multisensor fusion, region based methods involving segmentation and energy based fusion were introduced. Other fusion methods are based on saliency measurement, local gradient and edge fusion. Pixel based algorithms concentrate on increasing image contrast, whereas region based algorithms provide edge enhancement and feature extraction. A few attempts have been made to combine these algorithms in a single fused image. The integration of image fusion algorithms offers immense potential for future research, as each rule emphasizes different characteristics of the source image. This paper proposes a hybrid architecture (algorithm) for wavelet based image fusion combining the principles of pixel and region based rules.
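As a minimal illustration of the pixel-based fusion mentioned above, the sketch below fuses two registered grayscale images by a per-pixel weighted mean. The function name, the tiny 2x2 "images" and the 50/50 default weights are all hypothetical; a real implementation (e.g. in MATLAB) would operate on full-size registered images.

```python
# Pixel-level image fusion by weighted averaging -- a minimal sketch.
# Images are represented as row-major lists of lists of gray values.

def fuse_weighted(img_a, img_b, w_a=0.5):
    """Fuse two equally sized grayscale images by a per-pixel weighted mean."""
    w_b = 1.0 - w_a
    return [[w_a * a + w_b * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Hypothetical 2x2 patches: one left-focused, one right-focused.
A = [[100, 120], [140, 160]]
B = [[110, 130], [150, 170]]
fused = fuse_weighted(A, B)
```

More elaborate pixel rules replace the fixed weights with data-driven ones, for instance weights derived from local activity measures.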
There are several situations in which applications developed in MATLAB would be used for commercial and research activities.


CHAPTER-2
RELEVANCE


2. RELEVANCE:
Image fusion is a powerful technique in the fields of image analysis and computer vision. The fused image is formed to improve the visibility and resolution of the original images and to emphasize the feature information of the analyzed objects. The resulting fused image can be used in many tasks such as image segmentation, feature extraction and object recognition. It can also reduce errors in the detection and recognition of objects by incorporating complementary information from several source images. The process of image fusion combines images acquired from different modalities, or from different types of sensors simultaneously viewing the same scene. To date, image fusion has been successfully applied in many real applications, such as medical diagnosis, remote sensing, multi-focus CCD imaging and military situational awareness. The multi-resolution approach of wavelets is well suited to managing different image resolutions. Many research works have studied multi-resolution representations of signals and have established the usefulness of multi-resolution information for a number of image processing applications, including image fusion. Wavelet coefficients coming from different images can be appropriately combined to obtain new coefficients, so that the information in the source images is collected appropriately. The discrete wavelet transform (DWT) allows the decomposition of an image into different kinds of coefficients while preserving the image information. A wavelet-based image fusion method is therefore required to identify the most important information in the input images and transfer it into the fused image. Previous research considered relatively simple methods for combining the wavelet coefficients, such as weighting, maximum selection or linear/nonlinear analysis. Hong Zhang, Lei Liu and Nan Lin put forward a new medical image fusion method based on analyzing the wavelet coefficients and the corresponding energy.
Huaixin Chen proposed a multi-resolution fusion method based on principal component analysis. Nikolaos Mitianoudis and Tania Stathaki used independent component analysis to develop a different approach. Recently, Zhang Yingjie and Ge Liling proposed a fusion method using regions in the source images. The images are initially segmented in some way to produce a set of regions. Various properties of these regions can be calculated and used to determine which features from which images are included in the fused image. However, a good initial segmentation is difficult to obtain, since a single source image does not provide enough information.


CHAPTER-3
LITERATURE SURVEY


3.1 WAVELET BASED IMAGE FUSION:

A] Wavelet theory: The wavelet transform is a multiresolution analysis that represents image variations at different scales. A wavelet is an oscillating and attenuated function whose integral equals zero. The computation of the wavelet transform of a 2D image involves recursive filtering and sub-sampling. At each level, there are three detail images, denoted LH (containing horizontal information in high frequency), HL (containing vertical information in high frequency), and HH (containing diagonal information in high frequency). The decomposition also produces one approximation image, denoted LL, which contains the low frequency information; the wavelet transform can decompose the LL band recursively. The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a new image which is more suitable for human and machine perception or for further image-processing tasks such as segmentation, feature extraction and object recognition. Image fusion is the process by which two or more images are combined into a single image retaining the important features from each of the original images. The fusion process must satisfy the following requirements: 1) preserve all relevant information in the fused image; 2) suppress irrelevant parts of the image and noise; 3) minimize any artifacts or inconsistencies in the fused image.
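The one-level 2D decomposition described above (row filtering, then column filtering, with downsampling) can be sketched with the Haar wavelet, the simplest case. The function name and the unnormalized averaging/differencing pair are illustrative choices, and the image dimensions are assumed even.

```python
def haar_dwt2(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH quarter-size bands.
    Uses the averaging/differencing pair (a+b)/2, (a-b)/2."""
    # Row pass: split each row into low (averages) and high (differences).
    lo, hi = [], []
    for row in img:
        lo.append([(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)])
        hi.append([(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)])

    # Column pass applied to both half-images.
    def col_pass(half):
        approx, detail = [], []
        for r in range(0, len(half), 2):
            approx.append([(half[r][c] + half[r + 1][c]) / 2
                           for c in range(len(half[0]))])
            detail.append([(half[r][c] - half[r + 1][c]) / 2
                           for c in range(len(half[0]))])
        return approx, detail

    LL, LH = col_pass(lo)   # LL: approximation, LH: horizontal detail
    HL, HH = col_pass(hi)   # HL: vertical detail, HH: diagonal detail
    return LL, LH, HL, HH
```

The LH/HL naming here follows the convention used in this report (LH holds the horizontal high-frequency information); some texts swap the labels.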

The fusion process is summarized in Fig. 1.

Fig. 1 - Block diagram of the basic image fusion process.

In the first step the input images are decomposed into their multiscale edge representation, using either an image pyramid or a wavelet transform. The actual fusion process takes place in the difference (resp. wavelet) domain, where the fused multiscale representation is built by a pixel-by-pixel selection of the coefficients with maximum magnitude. Finally the fused image is computed by applying the appropriate reconstruction scheme. The most common form of transform image fusion is wavelet transform fusion, as shown in Fig. 2. Let I1(x, y) and I2(x, y) be the source images. Taking the wavelet transform W of these images gives their wavelet coefficients, from which the fused wavelet coefficients are obtained; taking the inverse wavelet transform W^-1 then gives the fused image. In common with all transform domain fusion techniques, the transformed images are combined in the transform domain using a defined fusion rule φ and then transformed back to the spatial domain to give the resulting fused image. Wavelet transform fusion is more formally defined by considering the wavelet transforms of the two registered input images I1(x, y) and I2(x, y) together with the fusion rule φ. The inverse wavelet transform is then computed and the fused image reconstructed:

I(x, y) = W^-1( φ( W(I1(x, y)), W(I2(x, y)) ) )
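The pixel-by-pixel maximum-magnitude selection used in this basic scheme reduces, for each subband, to a sketch like the following (the function name is hypothetical):

```python
def fuse_max_magnitude(coeffs_a, coeffs_b):
    """For each position, keep the wavelet coefficient with the larger
    absolute value -- the selection rule of basic transform-domain fusion."""
    return [[a if abs(a) >= abs(b) else b for a, b in zip(ra, rb)]
            for ra, rb in zip(coeffs_a, coeffs_b)]
```

After applying this rule to every subband, the inverse transform W^-1 of the fused coefficients yields the fused image.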


Fig. 2 - Fusion of the wavelet transforms of two images.

Wavelets are finite-duration oscillatory functions with zero average value. Their irregularity and good localization properties make them a better basis for the analysis of signals with discontinuities. Wavelets can be described by two functions: the scaling function φ(t), also known as the father wavelet, and the wavelet function, or mother wavelet, ψ(t). The mother wavelet ψ(t) undergoes translation and scaling operations to give a self-similar wavelet family as follows:

ψ_{a,b}(t) = (1/√a) ψ( (t − b) / a )

where a is the scale parameter and b the translation parameter.

Fig. 3 - An example of the image fusion process.

Practical implementation of wavelet transforms requires discretisation of the translation and scale parameters by taking a = a0^m and b = n·b0·a0^m (with integers m, n). Thus the wavelet family can be defined as,


ψ_{m,n}(t) = a0^{−m/2} ψ( a0^{−m} t − n·b0 )

If the discretisation is on a dyadic grid, with a0 = 2 and b0 = 1, it is called the standard DWT. Wavelet transformation involves constant-Q filtering and subsequent Nyquist sampling, as shown in Fig. 4 [9]. An orthogonal, regular filter bank, when iterated infinitely, gives orthogonal wavelet bases [14]. In the DWT implementation, the scaling function is treated as a low-pass filter and the mother wavelet as a high-pass filter.

Fig. 4 - Two-dimensional subband coding algorithm for the DWT.

The source image is decomposed in rows and columns by low-pass (L) and high-pass (H) filtering and subsequent downsampling at each level to get approximation (LL) and detail (LH, HL and HH) coefficients. The scaling function is associated with smoothing (low-pass) filters and the wavelet function with high-pass filtering.

B] Fusion algorithm: The advent of multiresolution wavelet transforms gave rise to wide developments in image fusion research. Several methods were proposed for various applications utilizing the directionality, orthogonality and compactness of wavelets. The fusion process should conserve all important analysis information in the image and should not introduce any artifacts or inconsistencies, while suppressing undesirable characteristics such as noise and other irrelevant details.

Fig. 5 - Wavelet based image fusion: the source images A and B are decomposed into discrete wavelet decomposition coefficients LL (approximations) and LH, HL, HH (details) at each level before the fusion rules are applied. The decision map is formulated based on the fusion rules. The resulting fused transform is reconstructed into the fused image by inverse wavelet transformation.

Fusion can be performed at the pixel, feature or decision level. The complexity of pixel based algorithms is lower than that of the other methods. They are used in applications where both the pixel spacing and the spectral properties of the source images are the same or similar. The advent of region based image fusion can be attributed to the inefficiencies faced by pixel algorithms in cases where the salient features in the images are larger than one pixel. Region based rules are more complicated than simple pixel algorithms and are used when the pixel spacings of the images differ. Decomposition coefficients are segmented into small regions and an activity measure for each region is computed. Coefficients with the maximum activity level are preserved, retaining the salient features. Popular methods include the computation of variances of small regions in the image and energy based salience measurement.


C) IMAGE FUSION METHODS:

1) Hybrid architecture based on wavelet transform
2) Wavelet based PCA method
3) Adaptive fusion method
4) Fusion method with different integration scheme


CHAPTER-4
PROPOSED WORK


4. PROPOSED WORK:
In our project we are going to implement the four image fusion methods mentioned above. By comparing the results of these methods, we will decide which method is best suited for image fusion. We now consider these four methods in turn.

1) Hybrid architecture based on wavelet transform: This method proposes a hybrid fusion scheme which integrates both pixel based rules and region based rules, using masks, in a single fused image. Pixel based rules operate on individual pixels in the image but do not take into account important details such as edges, boundaries and salient features larger than a single pixel. Use of a region based method may reduce the contrast in some images and does not always succeed in effectively removing ringing artifacts and noise from the source images. The inadequacies of these two types of fusion rules point to the importance of developing a hybrid architecture combining the advantages of both. The hybrid architecture in Fig. 6 uses different rules for fusing the low and high frequency sub-images of the wavelet decomposition. The test images are decomposed using the discrete wavelet transform. The approximations are subjected to a pixel based maximum selection rule. A 3X3 square mask and an odd order rectangular averaging mask (5X7) are each applied to the detail images; the 5X7 averaging filter mask gives better performance with less noise when compared to the square mask. The new sets of coefficients from each source image are added to get new approximations and details, and the final fused coefficient matrix is obtained by concatenating the new approximations and details. A pixel based maximum selection algorithm is used for the approximations, while square and averaging filter masks are applied to the detail coefficients. The high pass square filter mask helps in enhancing salient features such as edges. The averaging filter mask removes noise by taking the mean of the gray values of the window surrounding the centre pixel.

Fig.6 Mask based image fusion


Implementation: The algorithm for the hybrid fusion rule can be divided into three stages with reference to Fig. 6.

1) Read the two source images to be fused. Then perform independent wavelet decomposition of the two images up to level L to get approximation (LL) and detail (LH, HL, HH) coefficients for l = 1, 2, ..., L.

2) Select the pixel based algorithm for the approximations (LL), which fuses by taking the maximum valued pixels from the approximations of source images A and B:

A_F(i, j) = max( A_A(i, j), A_B(i, j) )

where A_F is the fused approximation coefficient and A_A and A_B are the input approximation coefficients of images A and B. A binary decision map is formulated based on the maximum valued pixels between the approximations. The decision rule D for fusion of the approximation coefficients of the two source images A and B is thus given by

D(i, j) = 1, if A_A(i, j) ≥ A_B(i, j)
        = 0, otherwise

A small window of size 3X3 or 5X7 is selected from the detail subbands, depending on whether the filter mask used is square or rectangular. Region level fusion of the details is performed by applying the 3X3 square mask and the 5X7 averaging filter mask to the detail coefficients, and the resulting coefficients from each subband are added:

D_F^l(i, j) = D_A^l(i, j) + D_B^l(i, j)        (8)

where D^l denotes the vertical, horizontal and diagonal high frequency subbands [15] of the fused and input (mask-filtered) detail coefficients, as mentioned above.

3) The final fused transform is obtained from the approximations through the pixel rule and from the vertical, horizontal and diagonal details through mask based fusion, for decomposition levels l = 1, 2, ..., L. The new coefficient matrix is obtained by concatenating the fused approximations and details. The fused image is reconstructed using the inverse wavelet transform and displayed.
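The two rules of the hybrid scheme can be sketched as follows. The function names are hypothetical, and averaging only over the in-bounds part of the window at the borders is an assumption not stated in the text.

```python
def fuse_approximations(ll_a, ll_b):
    """Step 2: pixel-based maximum-selection rule for the LL band."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(ll_a, ll_b)]

def average_mask(detail, h, w):
    """Apply an h x w averaging mask to a detail subband, averaging only
    over the in-bounds part of the window near the borders."""
    rows, cols = len(detail), len(detail[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc, n = 0.0, 0
            for di in range(-(h // 2), h // 2 + 1):
                for dj in range(-(w // 2), w // 2 + 1):
                    if 0 <= i + di < rows and 0 <= j + dj < cols:
                        acc += detail[i + di][j + dj]
                        n += 1
            out[i][j] = acc / n
    return out

def fuse_details(d_a, d_b, h=5, w=7):
    """Mask-filter each source's detail subband, then add the results (Eq. 8)."""
    fa, fb = average_mask(d_a, h, w), average_mask(d_b, h, w)
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(fa, fb)]
```

The 3X3 and 5X7 mask sizes come from the text; other odd-sized masks could be substituted in the same way.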

2) Wavelet based PCA method:

Traditional PCA image fusion consists of four steps: (i) geometric registration, in which the low-resolution multi-spectral images are resampled so that their size is the same as that of the high-resolution image; (ii) transforming the low-resolution multi-spectral images into principal component images by the PCA transformation; (iii) stretching the high-resolution image to have approximately the same variance and mean as the first principal component image; (iv) replacing the first principal component image with the stretched PAN data before the data are transformed back into the original space by the inverse PCA transformation. The first principal component image is replaced with the stretched PAN data because it carries the information common to all the bands. Traditional PCA fusion may not be satisfactory for fusing high-resolution images with low-resolution multi-spectral images, because it may distort the spectral characteristics of the multi-spectral data. Here, we combine standard PCA image fusion and wavelet-based image fusion into an approach called wavelet-based PCA image fusion, which improves the traditional PCA image fusion as shown in Fig. 7. We use the wavelet-based PCA method to fuse low-resolution Landsat TM images and high-resolution SPOT PAN images. The method includes seven steps: geometric registration, PCA transformation, histogram matching, wavelet decomposition, fusion, wavelet reconstruction, and inverse PCA transformation.
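As a toy illustration of step (ii), the sketch below performs a PCA transformation on just two flattened bands (the text uses six Landsat TM bands); the closed-form 2x2 eigen-decomposition and all names are illustrative simplifications.

```python
import math

def pca_2band(x1, x2):
    """Toy PCA transform for two flattened image bands (equal-length lists):
    returns the two principal-component 'images' and the largest eigenvalue.
    This mirrors the PCA transformation step, reduced from six bands to two."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    d1 = [v - m1 for v in x1]
    d2 = [v - m2 for v in x2]
    # Entries of the 2x2 covariance matrix.
    c11 = sum(v * v for v in d1) / n
    c22 = sum(v * v for v in d2) / n
    c12 = sum(a * b for a, b in zip(d1, d2)) / n
    # Closed-form eigen-decomposition of a symmetric 2x2 matrix.
    half_tr = (c11 + c22) / 2
    root = math.sqrt(((c11 - c22) / 2) ** 2 + c12 ** 2)
    lam1 = half_tr + root                      # largest eigenvalue
    theta = 0.5 * math.atan2(2 * c12, c11 - c22)
    e1 = (math.cos(theta), math.sin(theta))    # first eigenvector
    e2 = (-e1[1], e1[0])                       # orthogonal second eigenvector
    pc1 = [e1[0] * a + e1[1] * b for a, b in zip(d1, d2)]
    pc2 = [e2[0] * a + e2[1] * b for a, b in zip(d1, d2)]
    return pc1, pc2, lam1
```

With two perfectly correlated bands, the second principal component vanishes, showing how PC1 concentrates the common information that is later swapped for the PAN data.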


Fig. 7 - Flow chart of the wavelet-based PCA method.

A. Geometric registration: We use a 3 by 3 weighted mask to enlarge the Landsat TM images so that their size is the same as that of the SPOT PAN images.

B. PCA transformation: The PCA is a mathematical transformation that generates new images through linear combinations of the components of the original images. The transformation generates a new set of orthogonal axes; the new images are represented by these axes, and their components are therefore independent. In this study, we transform six original Landsat TM bands (1, 2, 3, 4, 5, and 7) into six principal component images by the equation:


y_j = A (x_j − m)        (9)

where A is a matrix whose rows are formed by the eigenvectors of the covariance matrix of the six Landsat TM images, ordered so that the first row of A is the eigenvector corresponding to the largest eigenvalue and the last row is the eigenvector corresponding to the smallest eigenvalue; m = (m_1, ..., m_6)^T, where m_k, k = 1, 2, ..., 6, are the means of the six Landsat TM bands; and x_j = (x_1j, ..., x_6j)^T, j = 1, ..., N_L, where N_L is the size of the Landsat TM image, are the gray values of the six original Landsat TM images. The set {y_kj | j = 1, ..., N_L} is called the k-th principal component image.

C. Histogram matching: Histogram matching is used to make the spectral distribution of the high-resolution image the same as that of the low-resolution multi-spectral images. In this study, we perform conventional histogram matching between the PAN image and the first principal component image. This method includes four steps: Step 1 - linearly stretch the range of the first principal component image to [0, 255]. Step 2 - calculate the cumulative distribution function (cdf) of the first principal component image and of the PAN image. Step 3 - adjust the cdf of the PAN image to approximate the cdf of the first principal component image. Step 4 - recover the range of the PAN and first principal component images to the original range of the first principal component image.

D. Wavelet decomposition: We use the multiscale wavelet decomposition (MWD) to decompose the specified PAN image P into an S component (content image) and D components (detail images):

P = S_n + Σ_{i=1}^{n} D_i        (10), (11)

where n is the number of detail images.

E. Fusion: We replace the S component, the content image of the specified PAN image, by the first principal component image, which has the same size as the S image.

F. Wavelet reconstruction: We use Eq. (10) to reconstruct Y_1, the first principal component image of the multi-spectral images, and D, the detail images of the specified PAN image, into the fused image F_new by the equation

F_new = Y_1 + Σ_{i=1}^{n} D_i        (12)

where n is the number of detail images. The process of integrating the wavelet decomposition, fusion, and wavelet reconstruction is called wavelet based image fusion; it replaces the content image of the high-resolution image with the low-resolution multi-spectral image.

G. PCA inverse transformation: We use the equation

x̂_j = A^{-1} y'_j + m        (13)

to back transform the fused image and the other component images into the original space. In the above equation, A^{-1} is the inverse of the matrix A in Eq. (9); m_k, k = 1, 2, ..., 6, are the means of the six original Landsat TM images; y_rj, r = 2, 3, ..., 6, j = 1, ..., N_S, where N_S is the size of the SPOT PAN image, are the values of the other principal component images; and F_new,j, j = 1, ..., N_S, are the values of the fused image. The set {x̂_kj | j = 1, ..., N_S} is called the k-th band of the high-resolution multi-spectral image.
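The histogram-matching step (C) above can be sketched by building a lookup table that maps each gray level of the PAN image to the level of the first principal component image with the nearest cumulative distribution value. The function name, the integer gray-level assumption, and the nearest-CDF matching are all illustrative choices.

```python
def match_histogram(source, reference, levels=256):
    """Adjust `source` gray values so their CDF approximates that of
    `reference` (steps 2-3 of the histogram-matching procedure).
    Both inputs are flat lists of integer gray levels in [0, levels)."""
    def cdf(values):
        hist = [0] * levels
        for v in values:
            hist[v] += 1
        total, acc, out = len(values), 0, []
        for h in hist:
            acc += h
            out.append(acc / total)
        return out

    cs, cr = cdf(source), cdf(reference)
    # For each source level, find the reference level with the closest CDF.
    lut = []
    for level in range(levels):
        target = cs[level]
        best = min(range(levels), key=lambda r: abs(cr[r] - target))
        lut.append(best)
    return [lut[v] for v in source]
```

Steps 1 and 4 (the linear range stretch and its recovery) would bracket this matching in a full implementation.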

3) Adaptive fusion method:


An adaptive fusion method for multi-focus images based on the wavelet transform is presented. The match degree of the source images is used as an adaptive threshold to decide whether maximum selection or weighted averaging is applied to the new wavelet coefficients of the sub-images.

Burt's fusion method: Burt's method combines maximum selection with weighted averaging to fuse the source images according to the match degree, which is calculated by the following equation:

M(i, j) = 2 E_A(i, j) E_B(i, j) / ( E_A(i, j)² + E_B(i, j)² )        (14)

where E_A(i, j) and E_B(i, j) are the sub-image region features at row i and column j in level k and frequency component d. If the match degree M(i, j) is lower than a fixed threshold T, Burt chooses the wavelet coefficient of the larger region feature as the fusion coefficient:

C_F(i, j) = C_A(i, j), if E_A(i, j) ≥ E_B(i, j); otherwise C_B(i, j)        (15)

where C_A and C_B are the wavelet coefficients of the sub-images of source images A and B in level k and frequency component d, and C_F is the fusion coefficient. Otherwise, the fusion coefficients are chosen as a weighted average:

C_F(i, j) = w_max C_big(i, j) + w_min C_small(i, j)        (16)

where C_big and C_small denote the coefficients with the larger and smaller region feature respectively, and the weights are given as:

w_max = 1/2 + 1/2 · (1 − M(i, j)) / (1 − T),   w_min = 1 − w_max        (17)

Adaptive fusion method procedure: Burt's method does consider image uncertainty, but it lacks flexibility due to the fixed threshold. In the proposed method, the match degree of the source images is used as an adaptive threshold which decides whether maximum selection or weighted averaging is used for the fusion coefficients. The process of the proposed method is as follows. First, the region features F_A(i, j) and F_B(i, j) of source images A and B are obtained, for instance from the local gradient. Then their difference is computed:

DF(i, j) = F_A(i, j) − F_B(i, j)        (18)

Here, the region features of the source images are used instead of those of the sub-images. The reason is that the floating point numbers generated after wavelet decomposition may exceed the computer's limited word length, which often makes the image reconstruction insufficient and reduces the clarity of the fused image. Second, the value range of DF(i, j) is divided into three intervals:

(−∞, −M(i, j)),   [−M(i, j), M(i, j)],   (M(i, j), +∞)        (19)

where the match degree M(i, j) of the source images is defined as:



M(i, j) = 2 F_A(i, j) F_B(i, j) / ( F_A(i, j)² + F_B(i, j)² )        (20)

Third, the wavelet coefficients of the sub-images are obtained by the wavelet transform. A different method is then selected for the fusion coefficients in each interval, according to the value of DF(i, j). If the value of DF(i, j) lies outside the interval [−M(i, j), M(i, j)], the fusion coefficients are selected as:

C_F(i, j) = C_A(i, j), if DF(i, j) > M(i, j)
          = C_B(i, j), if DF(i, j) < −M(i, j)        (21)

Since the two images have a certain similarity when DF(i, j) lies in [−M(i, j), M(i, j)], discarding any wavelet coefficient of the images would lead to a reduction of information. Therefore, a weighted average is used here to calculate the fusion coefficient. Since the value of the match degree lies in [0, 1], the fusion coefficient when M(i, j) is in [0, 0.5] is given by:

C_F(i, j) = (1 − M(i, j)) C_A(i, j) + M(i, j) C_B(i, j), if F_A(i, j) ≥ F_B(i, j)
          = M(i, j) C_A(i, j) + (1 − M(i, j)) C_B(i, j), otherwise        (22)

Otherwise, when M(i, j) is in (0.5, 1], the fusion coefficient is calculated as:

C_F(i, j) = M(i, j) C_A(i, j) + (1 − M(i, j)) C_B(i, j), if F_A(i, j) ≥ F_B(i, j)
          = (1 − M(i, j)) C_A(i, j) + M(i, j) C_B(i, j), otherwise        (23)

It is apparent that the weighting in (22) is the opposite of that in (23). In both cases the wavelet coefficient with the greater region feature makes the larger contribution to the fusion.
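The interval logic of equations (21)-(23) condenses into a scalar sketch: outside [-M, M] the larger-feature coefficient is selected outright; inside, the larger-feature coefficient receives the weight max(M, 1 - M). The function name and argument order are hypothetical.

```python
def fuse_adaptive(ca, cb, fa, fb, match):
    """Choose a fusion coefficient from wavelet coefficients ca, cb given
    region features fa, fb and match degree `match` in [0, 1], following
    the interval logic of Eqs. (21)-(23) for a single position."""
    df = fa - fb
    if abs(df) > match:                 # dissimilar regions: maximum selection
        return ca if fa > fb else cb    # of the larger-feature coefficient
    # Similar regions: weighted average, larger feature gets max(M, 1 - M).
    big, small = (ca, cb) if fa >= fb else (cb, ca)
    w = (1 - match) if match <= 0.5 else match
    return w * big + (1 - w) * small
```

Because max(M, 1 - M) is always at least 0.5, the coefficient from the image with the stronger local feature always dominates the average, as the text requires.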

4) Fusion method with different integration scheme:


This method presents a wavelet based image fusion method that integrates the decomposed coefficients in a novel, separate way during the fusion process. The method is developed by defining a new window-based fusion scheme that can effectively obtain the discrete wavelet transform (DWT) coefficients of the fused image from the source images. In this scheme, the coefficients of the low frequency (LF) bands are fused by a maximal variance based strategy, while the coefficients of the high frequency (HF) bands are fused by a maximal energy of image gradient strategy. The performance of the proposed fusion method has been compared with three existing methods using a number of synthetic and real test images.

Procedure of the proposed method: In order to better convey the concept and procedure of this fusion technique, the flowchart of the proposed method is shown in Fig. 8. The main idea is to perform DWT decomposition on each source image; the coefficients are then combined with a certain fusion rule, as displayed in the middle block of Fig. 8. By considering the physical meanings of the LF and HF coefficients, a novel fusion scheme is proposed that treats the coefficients of the LF and HF bands separately: the former are fused by a maximal variance based scheme, and the latter by a maximal energy of image gradient (EIG) based scheme. Finally, the fused image is obtained by performing the IDWT on the combined wavelet coefficients. It should be noted that the HF bands include the vertical, horizontal, and diagonal high frequencies of the image; therefore, the fusion process must be performed in all these domains.


Fig. 8 - Flowchart of the proposed fusion method.

To simplify the description of the different alternatives available in forming a fusion rule, we again consider only two source images, X and Y, and the fused image Z. The method can of course be easily extended to more than two images. Generally, an image I has a multiscale decomposition (MSD) representation denoted D_I; hence we will encounter D_X, D_Y and D_Z. Let p = (m, n, k, l) indicate the index corresponding to a particular MSD coefficient, where m and n indicate the spatial position in a given frequency band, k is the decomposition level, and l is the frequency band of the MSD representation. Then D_I(p) denotes the MSD value of the corresponding coefficient.

LF fusion algorithm:
The LF band is the original image at the coarser resolution level, which can be considered a smoothed and subsampled version of the original image. Most of the information of the source images is kept in the LF band, such as the mean intensity and texture information. As a result, coefficients in this band with high magnitudes do not necessarily correspond to salient features, and the aforementioned maximum-selection scheme may not work well for these coefficients. On the other hand, human visual interest is usually concentrated on the detection of changes in contrast between regions, that is, on the edges separating these regions. Thus, a good method for this band must produce large coefficients on those edges. Based on this analysis, we propose a variance based method for selecting the coefficients in the LF band, because the variance measure not only describes the texture information of the image but also effectively captures changes in contrast and edges. The fusion scheme of the LF band can therefore be formulated as the following equations:

u_I(p) = (1 / (S·T)) Σ_{(s,t)} D_I(m+s, n+t, k, l)        (24)

σ_I(p) = (1 / (S·T)) Σ_{(s,t)} ( D_I(m+s, n+t, k, l) − u_I(p) )²        (25)

D_Z(p) = D_X(p), if σ_X(p) ≥ σ_Y(p); otherwise D_Y(p)        (26)

where S × T is the neighbouring window size, and u_I(p) and σ_I(p) denote the mean value and the variance of the coefficients centred at (m, n) in the S × T window, respectively.
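A sketch of the variance-based LF rule of equations (24)-(26); the 3x3 default window and the clipping of the window at the band borders are assumptions.

```python
def window_variance(band, m, n, s=3, t=3):
    """Variance of the coefficients in an s x t window centred at (m, n),
    with the window clipped at the band borders (Eqs. 24-25)."""
    rows, cols = len(band), len(band[0])
    vals = [band[i][j]
            for i in range(max(0, m - s // 2), min(rows, m + s // 2 + 1))
            for j in range(max(0, n - t // 2), min(cols, n + t // 2 + 1))]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def fuse_lf(band_x, band_y):
    """Eq. (26): at each position keep the coefficient from the source
    whose local window variance is larger."""
    rows, cols = len(band_x), len(band_x[0])
    return [[band_x[m][n] if window_variance(band_x, m, n) >=
             window_variance(band_y, m, n) else band_y[m][n]
             for n in range(cols)] for m in range(rows)]
```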

HF fusion algorithm:
The HF bands contain the detail coefficients of an image, which usually have large absolute values corresponding to sharp intensity changes and preserve salient information in the image. Image fusion requires that the fused image must not discard any useful information contained in the source images and must effectively preserve the details, so it is important to find an appropriate scheme for merging the details of the input images. If the variance-based scheme described above were adopted here, the fused result would show blocking artifacts, because the HF information is mainly contour structure information as well as detail information. Therefore, in order to further improve the quality of the fused result, an energy of image gradient (EIG) based scheme is put forward for selecting the coefficients in the HF bands. For each detail subband, a window-based local energy of image gradient is defined as:

LEIG_X(m, n) = SUM_{(i, j) in w} { [C_X(i + 1, j) - C_X(i, j)]^2 + [C_X(i, j + 1) - C_X(i, j)]^2 }     (27)

where w is the window of the image centred at (m, n), and LEIG denotes the local energy of image gradient. After obtaining the LEIG of each block in the HF bands, the coefficients are selected by:

C_F^HF(m, n) = C_A^HF(m, n),  if LEIG_A(m, n) >= LEIG_B(m, n)
C_F^HF(m, n) = C_B^HF(m, n),  otherwise                                   (28)

Once all the coefficients have been obtained from the above two procedures, an inverse discrete wavelet transform (IDWT) is performed on them, and the fused image is thus constructed.
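The LEIG-based HF selection of Eqs. (27)-(28) can likewise be sketched in Python/NumPy. This is a hedged sketch: the forward-difference gradient approximation, the square window and the function names are choices made here for illustration.

```python
import numpy as np

def leig(band, win=3):
    """Local energy of image gradient (Eq. 27), using first-order
    forward differences summed over a win x win window."""
    band = np.asarray(band, dtype=float)
    gx = np.zeros_like(band)
    gy = np.zeros_like(band)
    gx[:-1, :] = np.diff(band, axis=0)   # C(i+1, j) - C(i, j)
    gy[:, :-1] = np.diff(band, axis=1)   # C(i, j+1) - C(i, j)
    energy = gx ** 2 + gy ** 2
    pad = win // 2
    e = np.pad(energy, pad, mode='reflect')
    out = np.empty(band.shape)
    for m in range(band.shape[0]):
        for n in range(band.shape[1]):
            out[m, n] = e[m:m + win, n:n + win].sum()
    return out

def fuse_hf_leig(hf_a, hf_b, win=3):
    """Eq. (28): keep the HF coefficient with larger gradient energy."""
    return np.where(leig(hf_a, win) >= leig(hf_b, win), hf_a, hf_b)
```

Fusing a flat detail band with one containing intensity changes keeps the coefficients of the latter, since a flat band has zero gradient energy.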


CHAPTER-5
EXPERIMENT AND ANALYSIS


5. Experimental Analysis and Results:

5.1 Hybrid architecture based on wavelet transform:


The source images used for the fusion experiment are coffee cup, bike, flower and bride, each pair consisting of one image with the left portion in focus and the other with the right portion in focus, each of size 256x256, as shown in [I]. Objective performance evaluation is done using the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), computed as:

MSE = (1 / (M * N)) * SUM_{i=1}^{M} SUM_{j=1}^{N} [S(i, j) - F(i, j)]^2

PSNR = 10 * log10(255^2 / MSE)

where S is the source image, F is the fused image, and M and N are the number of rows and columns of the image. The table in part [II] consolidates the results obtained. The hybrid fusion rule gives the lowest MSE values and the highest PSNR values in all test cases. The 5x7 averaging filter mask gives better performance with less noise than a square mask in all test cases, as is evident from the table. An optimum choice of filter mask extracts the maximum benefit from this fusion rule. Depending on the application, low-pass or high-pass filters can be used as masks, which provides the flexibility of using the same algorithm with different masks for fusion. Here, first-level decomposition is performed.
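A minimal sketch of these two metrics in Python/NumPy; the 8-bit peak value of 255 is an assumption consistent with the 256x256 grayscale test images:

```python
import numpy as np

def mse(source, fused):
    """Mean square error between source image S and fused image F."""
    s = np.asarray(source, dtype=float)
    f = np.asarray(fused, dtype=float)
    return np.mean((s - f) ** 2)

def psnr(source, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher indicates better fusion."""
    e = mse(source, fused)
    if e == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / e)
```

A lower MSE and a higher PSNR both indicate that the fused image is closer to the reference source image.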

5.2. The source images used for the fusion experiment:

[I: for each test scene, the grid shows the source image, source image 1 (left portion focused), source image 2 (right portion focused), and the fused image.]


5.3. COMPARISON BETWEEN PIXEL-BASED, REGION-BASED AND HYBRID FUSION RULES, AS SHOWN IN THE TABLE BELOW:

SOURCE       PSNR USING PIXEL-    MASK    PSNR USING WAVELETS
IMAGES       BASED FUSION RULE            Haar      Db4       Bior1.3   Coif1     Symlet
COFFEE CUP   34.0023              3x3     26.0410   26.4371   25.8676   26.4795   26.3641
                                  5x5     26.0623   26.5016   25.8378   26.5203   26.4284
                                  5x7     26.0446   26.5073   25.8185   26.5283   26.4404
BIKE         17.8685              3x3     11.7567   11.6756   11.6709   11.7048   11.7013
                                  5x5     11.7504   11.6708   11.6528   11.6986   11.6985
                                  5x7     11.7470   11.6697   11.6483   11.6971   11.6979
FLOWER       26.8961              3x3     19.9740   20.3177   19.8633   20.6532   20.0887
                                  5x5     19.9803   20.4489   19.8397   20.6934   20.2671
                                  5x7     19.9785   20.4487   19.8403   20.6975   20.2700
BRIDE        27.5869              3x3     19.9165   18.5813   19.4648   19.1756   19.5546
                                  5x5     20.0097   18.6424   19.2681   19.1731   19.6724
                                  5x7     19.9459   18.6296   19.2890   19.1843   19.6439

Papers published and accepted:


The paper "THE HYBRID ARCHITECTURE FOR IMAGE FUSION BASED ON WAVELET TRANSFORM" has been published in the International Journal for Advance Engineering Technology (IJAET). The same paper is under consideration for the conference PSG TECH - ICMCM'11.


CHAPTER-6
CONCLUSION


6. CONCLUSION:
This work has presented the concept of the wavelet transform and wavelet-based image fusion with four different methods.

In the hybrid method, the images are decomposed using the wavelet transform; a pixel-based maximum selection rule is applied to the approximation part, while region-based rules are applied to the detail part. The new fused coefficients are then obtained, and finally a concatenation and reconstruction step is performed using the inverse wavelet transform. The hybrid architecture presented here gives promising results in all test cases and can be further extended to all types of images by using different averaging, high-pass and low-pass filter masks. The variations in the performance of the fusion rules across the test images show that the choice of an optimum fusion rule depends mainly on the type of images to be fused, the degradation models used to introduce noise into the source images, and the application. Hence, using the hybrid architecture we can reconstruct sample images with more information than the traditional algorithms provide.

The wavelet-based PCA method consists of the following steps: geometric registration, PCA transformation, histogram matching, wavelet decomposition, fusion, wavelet reconstruction, and inverse PCA transformation.

In the adaptive fusion method, the match degree of the source images is used as an adaptive threshold that decides whether maximum selection or weighted averaging is applied to the fusion coefficients.

In the fusion method with different integration schemes, the coefficients of the low-frequency band are fused by a maximal-variance-based strategy, while the coefficients of the high-frequency bands are fused by a maximal energy-of-image-gradient strategy.

Objective performance evaluation is done using the Mean Square Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR).
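The hybrid pipeline summarised above (decompose, fuse approximation by maximum selection, fuse details by a region-based rule, reconstruct) can be sketched for a single decomposition level. This is an illustrative sketch only: it uses a hand-coded orthonormal Haar transform rather than the Db4/Bior1.3/Coif1/Symlet filters of the experiments, and the mean-squared-coefficient region measure and function names are assumptions made here.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar decomposition (even-sized input assumed)."""
    s, t = x[0::2, 0::2], x[0::2, 1::2]
    u, v = x[1::2, 0::2], x[1::2, 1::2]
    ll = (s + t + u + v) / 2          # approximation (LF) band
    lh = (s - t + u - v) / 2          # detail (HF) bands
    hl = (s + t - u - v) / 2
    hh = (s - t - u + v) / 2
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    lh, hl, hh = bands
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def region_energy(band, win=3):
    """Region activity: mean squared coefficient over a win x win mask."""
    pad = win // 2
    e = np.pad(np.asarray(band, dtype=float) ** 2, pad, mode='reflect')
    out = np.empty(band.shape)
    for m in range(band.shape[0]):
        for n in range(band.shape[1]):
            out[m, n] = e[m:m + win, n:n + win].mean()
    return out

def hybrid_fuse(img_a, img_b, win=3):
    """Hybrid rule: pixel maximum for the LF band, region energy for HF."""
    ca, det_a = haar_dwt2(np.asarray(img_a, dtype=float))
    cb, det_b = haar_dwt2(np.asarray(img_b, dtype=float))
    fused_ca = np.maximum(ca, cb)                 # pixel-based max rule
    fused_det = tuple(
        np.where(region_energy(da, win) >= region_energy(db, win), da, db)
        for da, db in zip(det_a, det_b))          # region-based rule
    return haar_idwt2(fused_ca, fused_det)
```

Because the Haar pair reconstructs exactly, fusing an image with itself returns the image unchanged, which is a quick sanity check on the pipeline.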


CHAPTER-7
REFERENCES


7. REFERENCES:

[1] Susmitha Vekkot and Pancham Shukla, "A Novel Architecture for Wavelet based Image Fusion," World Academy of Science, Engineering and Technology, 57, 2009.
[2] Din-Chang Tseng, Yi-Ling Chen, and Michael S. C. Liu, "Wavelet-based Multispectral Image Fusion," 0-7803-7031-7/01, IEEE, 2001.
[3] Ting Zhou and Binjie Hu, "Adaptive Fusion Method of Multi-focused Image Based on Wavelet Transform," 978-1-4244-3709-2/10, IEEE, 2010.
[4] Yong Yang, "Performing Wavelet Based Image Fusion through Different Integration Schemes," International Journal of Digital Content Technology and its Applications, 5(3), March 2011.
[5] M. Sasikala and N. Kumaravel, "A comparative analysis of feature-based image fusion methods," Information Technology Journal, 6(8):1224-1230, 2007.
[6] J. Daugman and C. Downing, "Gabor wavelets for statistical pattern recognition," The Handbook of Brain Theory and Neural Networks, M. A. Arbib, ed., Cambridge, MA, USA: MIT Press, 1998, pp. 414-420.
[7] S. Mallat, "Wavelets for a vision," Proceedings of the IEEE, 84(4):604-614, April 1996.
[8] A. Wang, H. Sun and Y. Guan, "The application of wavelet transform to multi-modality medical image fusion," Proc. IEEE International Conference on Networking, Sensing and Control (ICNSC), Ft. Lauderdale, Florida, 2006, pp. 270-274.
[9] O. Rockinger, "Pixel-level fusion of image sequences using wavelet frames," Proc. of the 16th Leeds Applied Shape Research Workshop, Leeds University Press, 1996, pp. 149-154.
[10] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing, 57(3):235-245, May 1995.
[11] M. Jian, J. Dong and Y. Zhang, "Image fusion based on wavelet transform," Proc. 8th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Distributed Computing, Qingdao, China, July 2007.
[12] Z. Yingjie and G. Liling, "Region-based image fusion approach using iterative algorithm," Proc. Seventh IEEE/ACIS International Conference on Computer and Information Science (ICIS), Oregon, USA, May 2008.
[13] H. Zhang, L. Liu and N. Lin, "A novel wavelet medical image fusion method," International Conference on Multimedia and Ubiquitous Engineering (MUE'07), Seoul, Korea, April 2007.
[14] V. Petrovic, "Multilevel image fusion," Proceedings of SPIE, 5099:87-96, 2003.
[15] Y. Zheng, X. Hou, T. Bian and Z. Qin, "Effective image fusion rules of multiscale image decomposition," Proc. 5th International Symposium on Image and Signal Processing and Analysis (ISPA'07), Istanbul, Turkey, September 2007, pp. 362-366.
[16] J. Gao, Z. Liu and T. Ren, "A new image fusion scheme based on wavelet transform," Proc. 3rd International Conference on Innovative Computing, Information and Control, Dalian, China, June 2008.
[17] I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Trans. Information Theory, 36:961-1005, 1990.
[18] M. Vetterli and C. Herley, "Wavelets and filter banks: theory and design," IEEE Transactions on Signal Processing, 40(9):2207-2232, September 1992.
[19] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674-693, July 1989.
[20] R. C. Luo and M. G. Kay, "Data fusion and sensor integration: state of the art 1990s," Data Fusion in Robotics and Machine Intelligence, M. A. Abidi and R. C. Gonzalez, eds., Academic Press, San Diego, 1992, pp. 7-135.
[21] Y. Du, P. W. Vachon, and J. J. V. Sanden, "Satellite image fusion with multiscale wavelet analysis for marine applications: preserving spatial information and minimizing artifacts (PSIMA)," Canadian Journal of Remote Sensing, 29(6):1423, November 2003.
[22] S. T. Smith, "MATLAB Advanced GUI Development," Dog Ear Publishing, 2006.
[23] O. Rockinger, "Various Registered Images," available online: http://www.imagefusion.org/, 2005.
