
Introduction

With advances in technology, image fusion has been employed extensively in applications such as medical diagnosis, surveillance, robotics, biometrics, remote sensing and computer vision to improve the visual analysis of an image. The primary objective of image fusion in any application is to combine the notable visual information from multiple input images into a single image while preserving the comprehensive, stable and accurate information present in the individual inputs, thus making the fused image easier for human or machine perception and also reducing storage requirements. Image registration and image resampling are the pre-processing steps in any image fusion. Image registration brings different sets of data into one common coordinate frame, while image resampling changes the pixel dimensions of an image, because the images to be fused should have identical pixel dimensions.
In medical imaging, a CT scan provides details about dense structures such as bone, while an MRI image shows brain tissue anatomy. If these images from different imaging modalities can be integrated into a single image carrying the attributes of both sources, functional and anatomical information become available together, which helps a surgeon plan a proper surgical procedure. The method of integrating attributes from different imaging modalities into a single image with the attributes of both source images is known as multi-modal image fusion. Similarly, due to the limited depth of focus of an optical system, a single image of a complex scene usually cannot represent all regions of interest effectively. This gives rise to multi-focus image fusion, which combines a series of images produced by gradually shifting the focal plane through the scene so that the resulting image has all objects of the complex scene in focus. Likewise, an IR sensor produces an image in the infrared spectrum while a visible sensor such as a CCD generates images in the visible spectrum. Multispectral image fusion can be employed to integrate the complementary information obtained from sensors operating in different frequency bands over the same area, so that the resulting image supports more precise interpretation and analysis of the data.

Fusion of images can be carried out either in the spatial domain or in the frequency domain. Fusion in the spatial domain manipulates the features and intensity values of the pixels to obtain the desired result. Retaining the original pixel values is an advantage of spatial domain techniques, as it helps represent image shapes more clearly. Techniques such as averaging, maximum selection, Principal Component Analysis (PCA), the Brovey transform, bilateral filtering and guided filtering fall under this category. Frequency domain fusion involves transforming the source images into the frequency domain using a forward transformation, fusing the frequency coefficients of the source images with an appropriate fusion mechanism, and obtaining the fused image in the spatial domain through the inverse transformation. Wavelet, curvelet, contourlet, Non-Subsampled Contourlet and shearlet transforms are the most familiar transform domain techniques employed in fusion applications. Higher computational complexity and degradation of contrast in the resulting image are the drawbacks of frequency domain methods.
In this study, a spatial domain technique is employed for image fusion. It uses a Cross Bilateral Filter for source image decomposition, employs novel saliency weight maps for combining the detail layers, and combines the coarse layers using a non-linear average in order to obtain a more informative fused image. The rest of this manuscript is organized as follows: Section 2 deals with the literature review and preliminary concepts, Section 3 discusses the proposed methodology, Section 4 presents the result analysis, and Section 5 concludes the paper.
Literature Review and Preliminary Concepts

This section reviews the existing literature and the preliminary concepts of the Cross Bilateral Filter. A considerable amount of research has been carried out over the last three decades in the area of image fusion. Carper et al. [ ] described multispectral image fusion based on the IHS transformation. Toet et al. [ ] proposed hierarchical image fusion based on Laplacian pyramids. Fusion strategies based on Principal Component Analysis (PCA), Independent Component Analysis (ICA), the Brovey transform and Intensity-Hue-Saturation (IHS) suffer from spectral degradation [gomathi]. Pyramidal schemes such as the Laplacian pyramid [ ], gradient pyramid [ ], ratio pyramid [ ] and contrast pyramid [ ] produce blocking effects. Naidu et al. [ ] proposed multi-focus image fusion based on multiscale singular value decomposition. Li et al. [ ] proposed a conventional discrete wavelet based image fusion; however, it suffers from shift variance, is sensitive to noise and degrades the contrast of the resulting image, which led Rockinger [ ] to use a wavelet transform with a shift-invariant feature. Yang et al. [ ] proposed a fusion method based on the multiscale, multidirectional contourlet transformation to overcome the limited directionality of wavelets. Bhatnagar et al. [ ] employed the Non-Subsampled Contourlet Transform for medical image fusion, where low frequency bands are fused using Shannon entropy and high frequency bands are fused using directive contrast information. Gomathi et al. [ ] also used the Non-Subsampled Contourlet Transform for multimodal image fusion, fusing the approximation sub-bands based on the mean and the detail sub-bands based on the variance. Vikrant Bhateja et al. [ ] developed a two-stage multimodal fusion framework using a cascaded combination of the Stationary Wavelet Transform (SWT) and the Non-Subsampled Contourlet Transform (NSCT). Shutao Li et al. [ ] used a guided filter based weighted average approach for fusion of input images: a Gaussian filter performs two-scale image decomposition and a guided filter generates optimized weight maps for the base and detail layers of the source images, which are then fused by weighted averaging. Shutao Li et al. [ ] employed a directional filter bank structure with a bilateral filter for fusing multi-sensor images. Shreyamsha Kumar [ ] described image fusion using the Cross Bilateral Filter, where the sources are combined by a weighted average whose weights are calculated from the detail components obtained by decomposing the source images with the Cross Bilateral Filter.

Cross Bilateral Filter

The Cross Bilateral Filter [ ] is an edge-preserving image smoothing filter, like the bilateral and guided filters. Unlike the bilateral filter, it uses one source image as a guidance image to shape the filter kernel used to filter the other source image, and vice versa. If P and Q are the two source images to be fused, the filter takes into account both the geometric closeness and the gray-level similarity of neighboring pixels in image P when shaping the kernel used to filter image Q, and vice versa. Mathematically, the output of the Cross Bilateral Filter for image Q at pixel m can be expressed as

Q_{CBF}(m) = \frac{1}{W(m)} \sum_{n \in S} G_{\sigma_s}(\lVert m - n \rVert)\, G_{\sigma_r}(\lvert P(m) - P(n) \rvert)\, Q(n),
\qquad W(m) = \sum_{n \in S} G_{\sigma_s}(\lVert m - n \rVert)\, G_{\sigma_r}(\lvert P(m) - P(n) \rvert)

where S is a window centred at pixel m, G_{\sigma_s} is the Gaussian spatial kernel measuring geometric closeness and G_{\sigma_r} is the Gaussian range kernel measuring gray-level similarity in the guidance image P.
Since the output of the Cross Bilateral Filter represents the coarse layer of an image, the detail layer, which captures the high frequency variations, is obtained by subtracting the coarse layer from the respective original image. If PCBF and QCBF are the coarse layers obtained from the Cross Bilateral Filter, the respective detail layers are given as PD = P − PCBF and QD = Q − QCBF. After obtaining the coarse and detail layers of each source image, the fusion rules employed for combining the respective coarse layers and detail layers are presented below.
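To make this decomposition concrete, the following is a minimal NumPy sketch of a cross bilateral filter and the resulting coarse/detail split. The window radius and the sigma values are illustrative assumptions and are not parameters taken from this paper.

import numpy as np

def cross_bilateral_filter(guide, target, radius=2, sigma_s=1.8, sigma_r=25.0):
    """Filter `target` with a kernel shaped by geometric closeness and by
    gray-level similarity measured on `guide` (the other source image)."""
    h, w = target.shape
    g = np.pad(guide.astype(np.float64), radius, mode='reflect')
    t = np.pad(target.astype(np.float64), radius, mode='reflect')
    out = np.zeros((h, w), dtype=np.float64)

    # Spatial (geometric closeness) part of the kernel, identical for all pixels.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))

    for i in range(h):
        for j in range(w):
            g_patch = g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            t_patch = t[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: gray-level similarity in the guidance image.
            rng = np.exp(-((g_patch - g[i + radius, j + radius])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * t_patch) / np.sum(weights)
    return out

# Coarse/detail decomposition of grayscale source images P and Q:
#   P_cbf = cross_bilateral_filter(guide=Q, target=P);  P_d = P - P_cbf
#   Q_cbf = cross_bilateral_filter(guide=P, target=Q);  Q_d = Q - Q_cbf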

Proposed Methodology

The flow diagram of the proposed algorithm is shown in Fig. . It contains a preprocessing block for proper registration of the source images as well as for obtaining their grayscale versions from the RGB components when the sources are color images. The algorithm corresponding to the flow diagram is given below.

Algorithm of Proposed methodology:

Step 1: Consider two source images P and Q.


Step 2: Perform RGB to grayscale conversion of the source images if they are color images and resize the images to the same dimensions.

Step 3: Decompose each source image into coarse and detail layers using Cross Bilateral Filter.

Step 4: Fuse the coarse layers of source images using non-linear average fusion rule specified in
equation
Step 5: Fuse the detail layers of source images using a novel hybrid saliency based absolute maximum
rule specified in equation

Step 6: Obtain Fused image by combining fused coarse and detail images.
Fused image =FC+FD
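As a rough sketch of how these steps fit together, the Python outline below assumes helper functions cross_bilateral_filter, fuse_coarse and fuse_detail that implement Steps 3 to 5 as described in the text (sketches for each are given in the corresponding subsections below); the function names are only illustrative.

import numpy as np

def fuse_images(P, Q):
    """End-to-end sketch for two registered, grayscale, equally sized source
    images P and Q (Steps 1 and 2 are assumed to be already done)."""
    P = P.astype(np.float64)
    Q = Q.astype(np.float64)

    # Step 3: decompose each source into coarse and detail layers.
    P_cbf = cross_bilateral_filter(guide=Q, target=P)
    Q_cbf = cross_bilateral_filter(guide=P, target=Q)
    P_d, Q_d = P - P_cbf, Q - Q_cbf

    # Step 4: fuse the coarse layers with the non-linear average rule.
    F_c = fuse_coarse(P_cbf, Q_cbf)

    # Step 5: fuse the detail layers with hybrid-saliency weight maps.
    F_d = fuse_detail(P, Q, P_d, Q_d)

    # Step 6: the fused image is the sum of fused coarse and detail layers.
    return F_c + F_d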

Fusion rule for coarse layers


In the literature, the coarse layers are mostly combined using the average rule given in equation 1, which computes each pixel intensity of the fused image as the average of the corresponding pixel intensities of the source images.

FC(i,j) = (PCBF(i,j) + QCBF(i,j))/2

where FC is the fused coarse component, M and N are the row and column dimensions of the images, and (i,j) specifies the pixel location.

Since this rule combines the input images in equal proportion to produce the fused output, it is unbiased. This unbiased nature of the average rule creates problems when the source images are highly dissimilar, where the saliency information of one source image may be much more significant than that of the other. To avoid this problem, a non-linear average rule is employed in this paper for fusing the coarse layers. According to this rule, the fused coarse component is obtained as

FC(i,j) = ln( (e^{PCBF(i,j)} + e^{QCBF(i,j)}) / 2 )


The non-linear nature of the exponential discriminates the more significant components from the less significant ones, giving priority to the more significant components while suppressing the less significant ones during the averaging process.
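A minimal sketch of this rule is given below. Rescaling the coarse layers to roughly [0, 1] before exponentiation is an implementation assumption made to keep exp() numerically well behaved for 8-bit data, not a step stated in the text.

import numpy as np

def fuse_coarse(P_cbf, Q_cbf):
    """Non-linear average (log of the mean of exponentials) of the coarse layers."""
    # Assumed rescaling to ~[0, 1] so the exponentials stay in a sensible range.
    p = P_cbf.astype(np.float64) / 255.0
    q = Q_cbf.astype(np.float64) / 255.0
    fused = np.log((np.exp(p) + np.exp(q)) / 2.0)
    return fused * 255.0  # back to the original intensity scale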

Fusion rule for detail layers


In this paper, we propose a novel method for computing a hybrid saliency for the detail layers of an image. For each detail layer, the hybrid saliency information is obtained by extracting image saliency using the Modified Spatial Frequency and by computing the difference between the median and the Gaussian mean of the detail layer over a 3x3 neighborhood. The Modified Spatial Frequency measures the variation of each pixel intensity with respect to its neighborhood and includes row, column and diagonal variations, and thus has the ability to capture thin details in an image. It can be computed as follows:
MSF = \sqrt{RF^2 + CF^2 + DF^2}

RF = \sqrt{\frac{1}{M(N-1)} \sum_{m=1}^{M} \sum_{n=2}^{N} \left( p_{m,n} - p_{m,n-1} \right)^2}

CF = \sqrt{\frac{1}{(M-1)N} \sum_{m=2}^{M} \sum_{n=1}^{N} \left( p_{m,n} - p_{m-1,n} \right)^2}

DF = \sqrt{\frac{1}{(M-1)(N-1)} \sum_{m=2}^{M} \sum_{n=2}^{N} \left( p_{m,n} - p_{m-1,n-1} \right)^2 + \frac{1}{(M-1)(N-1)} \sum_{m=2}^{M} \sum_{n=2}^{N} \left( p_{m-1,n} - p_{m,n-1} \right)^2}
where RF, CF and DF are the row, column and diagonal frequencies respectively, and M, N are the row and column dimensions of the image p. A region with a higher MSF value carries finer detail.
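The sketch below computes an MSF-based saliency map. The formulas above are written over the whole M x N image; evaluating them in a sliding window (here via a local box average of the squared differences) to obtain a per-pixel saliency map is an implementation assumption, since the paper does not state the window it uses.

import numpy as np
from scipy.ndimage import uniform_filter

def msf_saliency(p, win=3):
    """Per-pixel Modified Spatial Frequency map of a 2-D detail layer `p`."""
    p = p.astype(np.float64)
    # Squared first differences (row, column and the two diagonals),
    # zero-padded back to the original image size.
    rf2 = np.zeros_like(p); rf2[:, 1:] = (p[:, 1:] - p[:, :-1]) ** 2
    cf2 = np.zeros_like(p); cf2[1:, :] = (p[1:, :] - p[:-1, :]) ** 2
    d1 = np.zeros_like(p); d1[1:, 1:] = (p[1:, 1:] - p[:-1, :-1]) ** 2
    d2 = np.zeros_like(p); d2[1:, 1:] = (p[:-1, 1:] - p[1:, :-1]) ** 2
    # Local averaging of each term plays the role of the global normalization.
    local = lambda x: uniform_filter(x, size=win)
    return np.sqrt(local(rf2) + local(cf2) + local(d1) + local(d2))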

Steps for hybrid saliency extraction


Step 1: Compute the saliency of each detail layer using MSF. Let the saliencies be S1 and S2.
Step 2: Compute the local median over a 3x3 neighborhood of each detail layer. Let the results be Me1 and Me2.
Step 3: Perform Gaussian smoothing over a 3x3 neighborhood of each detail layer. Let the results be M1 and M2.
Step 4: Compute a second saliency for each detail layer as the difference between the median and the smoothed images. Let these saliencies be S3 and S4,

where S3 = Me1 − M1 and S4 = Me2 − M2.

Step 5: Compute the Structural Similarity (SSIM) [ ] between each source image and its respective saliencies.

E1=SSIM(P,S1) and E2=SSIM(P,S3)

E3=SSIM(Q,S2) and E4=SSIM(Q,S4)


Step 6: Compute weights for hybrid saliency based on structural similarity measures of Step 5 as
follows.

MSF Saliency Weight for the first detail layer, W1 = E1/(E1+E2)

Median-Mean Saliency Weight for the first detail layer, W2 = E2/(E1+E2)

MSF Saliency Weight for the second detail layer, W3 = E3/(E3+E4)

Median-Mean Saliency Weight for the second detail layer, W4 = E4/(E3+E4)

Step 7: Compute Hybrid saliency for detail layers using the following relations.

Hybrid Saliency for first detail layer, S5=W1.*S1+W2.*S3

Hybrid Saliency for second detail layer, S6=W3.*S2+W4.*S4

Once the saliency is obtained for each detail layer, an absolute-maximum rule is applied to the saliency maps to generate the weight maps for the detail layers as follows:

WD1 = 1 and WD2 = 0 if S5 >= S6

WD1 = 0 and WD2 = 1 elsewhere

where WD1 and WD2 are the weight maps of the first and second detail layers.

The fused detail component is obtained as FD = WD1.*PD + WD2.*QD
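Putting Steps 1 to 7 and the absolute-maximum rule together, the following sketch reuses the msf_saliency map from the earlier sketch and standard SciPy/scikit-image filters; the 3x3 median window, the Gaussian sigma/truncation and the SSIM data-range handling are implementation assumptions.

import numpy as np
from scipy.ndimage import median_filter, gaussian_filter
from skimage.metrics import structural_similarity as ssim

def _ssim(a, b):
    """SSIM index between two arrays; data_range is taken from the inputs."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    rng = max(a.max() - a.min(), b.max() - b.min(), 1e-6)
    return ssim(a, b, data_range=rng)

def fuse_detail(P, Q, P_d, Q_d):
    """Hybrid-saliency fusion of the detail layers P_d and Q_d of sources P and Q."""
    # Step 1: MSF saliency of each detail layer (see the msf_saliency sketch above).
    S1, S2 = msf_saliency(P_d), msf_saliency(Q_d)
    # Steps 2-4: median minus Gaussian mean over a 3x3 neighborhood.
    S3 = median_filter(P_d, size=3) - gaussian_filter(P_d, sigma=1.0, truncate=1.0)
    S4 = median_filter(Q_d, size=3) - gaussian_filter(Q_d, sigma=1.0, truncate=1.0)
    # Step 5: structural similarity between each source and its saliency maps.
    E1, E2 = _ssim(P, S1), _ssim(P, S3)
    E3, E4 = _ssim(Q, S2), _ssim(Q, S4)
    # Step 6: normalized weights from the SSIM values.
    W1, W2 = E1 / (E1 + E2), E2 / (E1 + E2)
    W3, W4 = E3 / (E3 + E4), E4 / (E3 + E4)
    # Step 7: hybrid saliency for each detail layer.
    S5 = W1 * S1 + W2 * S3
    S6 = W3 * S2 + W4 * S4
    # Absolute-maximum rule: binary weight maps select the more salient layer.
    WD1 = (S5 >= S6).astype(np.float64)
    WD2 = 1.0 - WD1
    return WD1 * P_d + WD2 * Q_d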

Fusion quality assessment parameters

Researchers have proposed several metrics to evaluate fusion performance [ ]. In this paper, image quality assessment parameters such as Average Pixel Intensity (API), Standard Deviation (SD), Average Gradient (AG), Entropy (H), Mutual Information (MI), Fusion Symmetry (FS), Cross Correlation (CC), Spatial Frequency (SF) and Edge Strength (QABF) are considered to evaluate the performance of the proposed fusion methodology.

Performance Metric: Description

Average Pixel Intensity (API): Mean intensity of the fused image; a higher API corresponds to an image with more contrast.
Standard Deviation (SD): Spread of intensities around the mean; a higher SD indicates better contrast.
Average Gradient (AG): Measures the sharpness of an image; an image has more clarity if its AG value is high.
Entropy (H): Estimates the information content of an image; a large H indicates an image with more information.
Mutual Information (MI): Measures the amount of information transferred from the source images to the fused image; a higher value is desired.
Fusion Symmetry (FS): Estimates the symmetrical relation between the fused and input images; a value of FS close to 2 indicates good symmetry.
Cross Correlation (CC): Computes the relevance of the fused image to the input images.
Spatial Frequency (SF): Measures the activity level of an image; a higher value is desired.
Edge Strength (QABF): Its range is [0, 1]; a QABF value close to 1 indicates good edge preservation in the fused image.
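As an illustration of how a few of these metrics can be computed, the sketch below implements API, entropy and spatial frequency using their common formulations; these are standard definitions, not formulas taken verbatim from this paper.

import numpy as np

def average_pixel_intensity(img):
    """API: mean gray level of the image."""
    return float(np.mean(img))

def entropy(img):
    """Shannon entropy (in bits) of the 8-bit gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """SF: root of the mean squared row and column first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean((img[:, 1:] - img[:, :-1]) ** 2))
    cf = np.sqrt(np.mean((img[1:, :] - img[:-1, :]) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))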
Result Analysis

The performance of the proposed method is evaluated with various quality assessment metrics through experiments on several benchmark medical, multispectral and multi-focus images taken from www.imagefusion.org. In this paper, results for one pair of test images in each of the medical, multispectral and multi-focus categories are presented. In the medical category, results are provided for CT and MRI images of a normal brain, while an infrared-visible image pair (gun) is considered for the multispectral category. A multi-focus (office) image pair is used to evaluate the performance of multi-focus image fusion. All source image data sets are perfectly registered and have the same dimensions of 256x256. Further, the proposed method is compared with different existing methods [] in terms of the previously specified performance parameters for the medical, multispectral and multi-focus images. Figures 4.1, 4.2 and 4.3 show the fused images for multimodal, multi-focus and multispectral image fusion respectively. In all three figures, (a) and (b) represent the source images while (c), (d), (e), (f), (g) and (h) represent the fused images of the proposed method and of [ ], [ ], [ ], [ ] and [ ] respectively. The values of the various image quality assessment metrics for the proposed and the existing methods are presented in the table for multimodal, multi-focus and multispectral image fusion. From the table, it is evident that the proposed algorithm performs well on most of the measured metrics and is superior to the existing fusion algorithms considered for analysis. At the same time, the visual quality of the fused images produced by the proposed method is also better than that of the existing algorithms considered.

Conclusion
In this paper, the detail and coarse layers are separated from the source images using a Cross Bilateral Filter. Novel hybrid saliency based weight maps are used to fuse the detail layers, while the coarse layers are fused using a non-linear average rule. The performance of the proposed algorithm has been evaluated in terms of various image quality assessment parameters and compared with several state-of-the-art fusion mechanisms for multimodal, multi-focus and multispectral applications. The proposed algorithm is observed to perform appreciably better than the existing methods on most of the performance parameters, which shows its efficacy.
