
CHAPTER 1

INTRODUCTION








1. INTRODUCTION

Resolution has frequently been referred to as an important aspect of an image, and images are
often processed in order to obtain enhanced resolution. One of the most commonly used
techniques for image resolution enhancement is interpolation. Interpolation has been widely used
in many image processing applications, such as facial reconstruction, multiple description coding,
and super resolution. There are three well known interpolation techniques, namely nearest
neighbor interpolation, bilinear interpolation, and bicubic interpolation.
Image resolution enhancement in the wavelet domain is a relatively new research topic,
and many new algorithms have recently been proposed. The discrete wavelet transform (DWT) is
one of the wavelet transforms commonly used in image processing. DWT decomposes an image into
different subband images, namely low-low (LL), low-high (LH), high-low (HL), and high-high
(HH). Another recent wavelet transform which has been used in several image processing
applications is the stationary wavelet transform (SWT). In short, SWT is similar to DWT, but it
does not use down-sampling; hence the subbands have the same size as the input image.
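As a concrete sketch of this decomposition, the simplest case (a one-level Haar DWT) can be written in a few lines of NumPy; note that the report's implementation uses Daubechies filters in MATLAB, and the normalisation below is an illustrative choice:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: sums/differences over 2x2 blocks give the
    LL, LH, HL, HH subbands, each half the input size in both dimensions."""
    a = img[0::2, :] + img[1::2, :]        # vertical low-pass
    d = img[0::2, :] - img[1::2, :]        # vertical high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0   # average of each 2x2 block
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)            # four 2x2 subbands
```

Because DWT downsamples, each subband is half the input size; SWT omits the downsampling, so its subbands match the input size.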
In this work, we propose an image resolution enhancement technique which
generates a sharper high resolution image. The proposed technique uses DWT to decompose a low
resolution image into different subbands. The three high frequency subband images are then
interpolated using bicubic interpolation. The high frequency subbands obtained by applying SWT
to the input image are added to the interpolated high frequency subbands in order to
correct the estimated coefficients. In parallel, the input image is also interpolated separately.
Finally, the corrected interpolated high frequency subbands and the interpolated input image are
combined using the inverse DWT (IDWT) to achieve a high resolution output image. The
proposed technique has been compared with conventional and state-of-the-art image resolution
enhancement techniques. The conventional techniques used are the following:
interpolation techniques: bilinear interpolation and bicubic interpolation;
wavelet zero padding (WZP).
The state-of-the-art techniques used for comparison purposes are the following:
regularity-preserving image interpolation
new edge-directed interpolation (NEDI)
hidden Markov model (HMM)
HMM-based image super resolution (HMM SR)
WZP and cycle-spinning (WZP-CS)
WZP, CS, and edge rectification (WZP-CS-ER)
DWT based super resolution (DWT SR)
complex wavelet transform based super resolution (CWT SR)
According to the quantitative and qualitative experimental results, the proposed technique
outperforms the aforementioned conventional and state-of-the-art techniques for image resolution
enhancement.








































CHAPTER 2



LITERATURE
SURVEY

















2. LITERATURE SURVEY
2.1 INTERPOLATION

One of the most commonly used techniques for image resolution enhancement is interpolation.
Interpolation has been widely used in many image processing applications, such as facial
reconstruction, multiple description coding, and super resolution. There are three well known
interpolation techniques, namely nearest neighbor interpolation, bilinear interpolation, and
bicubic interpolation. Interpolation is the process of using known data values to estimate
unknown data values; various interpolation techniques are also often used in the atmospheric sciences.

2.2 NEAREST-NEIGHBOR INTERPOLATION:
Nearest-neighbor interpolation (also known as proximal interpolation or, in some
contexts, point sampling) is a simple method of multivariate interpolation in one or more
dimensions.
Interpolation is the problem of approximating the value at a non-given point in some
space, given the values of points around (neighboring) that point. The nearest neighbor
algorithm simply selects the value of the nearest point and does not consider the values of other
neighboring points at all, yielding a piecewise-constant interpolant. The algorithm is very simple
to implement and is commonly used (usually along with mipmapping) in real-time 3D rendering to
select color values for a textured surface.
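The selection rule just described can be sketched in a few lines (Python/NumPy; `nearest_upscale` is an illustrative helper, not part of this project's code):

```python
import numpy as np

def nearest_upscale(img, scale):
    """Upscale by copying, for every output pixel, the value of the
    nearest input pixel (no averaging of neighbors at all)."""
    rows = (np.arange(int(img.shape[0] * scale)) / scale).astype(int)
    cols = (np.arange(int(img.shape[1] * scale)) / scale).astype(int)
    return img[np.ix_(rows, cols)]

small = np.array([[1, 2],
                  [3, 4]])
big = nearest_upscale(small, 2)
# every input pixel becomes a 2x2 block of the same value
```

The piecewise-constant result is what produces the familiar blocky look of nearest-neighbor enlargement.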
2.3 BILINEAR INTERPOLATION:
In computer vision and image processing, bilinear interpolation is one of the basic
resampling techniques.
In texture mapping, it is also known as bilinear filtering or bilinear texture mapping, and
it can be used to produce a reasonably realistic image. An algorithm is used to map a screen pixel
location to a corresponding point on the texture map. A weighted average of the attributes (color,
alpha, etc.) of the four surrounding texels is computed and applied to the screen pixel. This
process is repeated for each pixel forming the object being textured.
When an image needs to be scaled up, each pixel of the original image needs to be moved
in a certain direction based on the scale constant. However, when scaling up an image by a non-
integral scale factor, there are pixels (i.e., holes) that are not assigned appropriate pixel values. In
this case, those holes should be assigned appropriate RGB or grayscale values so that the output
image does not have non-valued pixels.
Bilinear interpolation can be used where perfect image transformation with pixel
matching is impossible, so that one can calculate and assign appropriate intensity values to
pixels. Unlike other interpolation techniques such as nearest neighbor interpolation and bicubic
interpolation, bilinear interpolation uses only the 4 nearest pixel values which are located in
diagonal directions from a given pixel in order to find the appropriate color intensity values of
that pixel.
Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values
surrounding the unknown pixel's computed location. It then takes a weighted average of these 4
pixels to arrive at its final, interpolated value. The weight on each of the 4 pixel values is based
on the computed pixel's distance (in 2D space) from each of the known points.
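The 2x2 weighted average described above can be written directly (an illustrative Python/NumPy helper, separate from the MATLAB implementation later in this report):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Value at fractional position (y, x) as a distance-weighted average
    of the closest 2x2 neighborhood of known pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx    # blend along x
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy                   # blend along y

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
v = bilinear_sample(img, 0.5, 0.5)   # centre of the 2x2 neighborhood
```

At the exact centre every known pixel gets weight 1/4, so v is the plain average of the four values.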
2.4 BICUBIC INTERPOLATION:
In mathematics, bicubic interpolation is an extension of cubic interpolation for
interpolating data points on a two dimensional regular grid. The interpolated surface is smoother
than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation.
Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the
cubic convolution algorithm.
In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor
interpolation in image resampling, when speed is not an issue. Images resampled with bicubic
interpolation are smoother and have fewer interpolation artifacts.
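One common realisation is the cubic convolution algorithm, which weights the four nearest samples along each axis with a piecewise-cubic kernel. The sketch below shows the one-dimensional case (illustrative Python; a = -0.5 is the usual parameter choice, not something fixed by this report):

```python
def cubic_kernel(x, a=-0.5):
    """Keys cubic convolution kernel: nonzero for |x| < 2, so four
    samples contribute along each axis."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp_1d(p, t):
    """Interpolate between p[1] and p[2] at fraction t (0 <= t <= 1)
    using the four equally spaced samples p[0..3]."""
    return sum(p[i] * cubic_kernel(t - (i - 1)) for i in range(4))

y = cubic_interp_1d([0.0, 1.0, 2.0, 3.0], 0.5)
# on linear data the cubic fit reproduces the linear value, here 1.5
```

Bicubic interpolation applies this kernel separably, first along rows and then along columns, touching a 4x4 neighborhood in total.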
CHAPTER 3

















SYSTEM
ANALYSIS


















3. SYSTEM ANALYSIS
3.1 EXISTING SYSTEM

Intuitively, we all know that frequency has something to do with the rate of change of
something. If something (a mathematical or physical variable, to use the technically correct
term) changes rapidly, we say that it is of high frequency, whereas if this variable does not
change rapidly, i.e., it changes smoothly, we say that it is of low frequency. If this variable does
not change at all, then we say it has zero frequency, or no frequency.
3.2 PROPOSED SYSTEM
In image resolution enhancement by interpolation, the main loss is in the high
frequency components (i.e., edges), due to the smoothing caused by interpolation. In
order to increase the quality of the super resolved image, preserving the edges is essential. In this
work, DWT has been employed in order to preserve the high frequency components of the
image. The redundancy and shift invariance of the DWT mean that DWT coefficients are
inherently interpolable.
In this correspondence, a one-level DWT (with the Daubechies 9/7 wavelet function) is
used to decompose the input image into different subband images. The three high frequency
subbands (LH, HL, and HH) contain the high frequency components of the input image. In the
proposed technique, bicubic interpolation with an enlargement factor of 2 is applied to the high
frequency subband images. Downsampling in each of the DWT subbands causes information
loss in the respective subbands; SWT is therefore employed to minimize this loss.
The interpolated high frequency subbands and the SWT high frequency subbands have
the same size, which means they can be added to each other. The new corrected high frequency
subbands can be interpolated further for higher enlargement. It is also known that, in the wavelet
domain, the low resolution image is obtained by lowpass filtering of the high resolution image.
In other words, the low frequency subband is a low resolution version of the original image.
Therefore, instead of using the low frequency subband, which contains less information than the
original high resolution image, we use the input image for the interpolation of the low frequency
subband image. Using the input image instead of the low frequency subband increases the quality
of the super resolved image. Fig. 1 illustrates the block diagram of the proposed image resolution
enhancement technique.
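The size property that makes this addition possible can be checked with a minimal undecimated (stationary) Haar level; this is only a NumPy sketch using circular shifts, whereas the implementation uses db9/7 filters in MATLAB:

```python
import numpy as np

def haar_swt2(img):
    """One-level stationary (undecimated) Haar transform: neighbors are
    reached by circular shifts instead of downsampling, so every subband
    has exactly the input's size."""
    a = img + np.roll(img, -1, axis=0)      # vertical low-pass
    d = img - np.roll(img, -1, axis=0)      # vertical high-pass
    ll = (a + np.roll(a, -1, axis=1)) / 4.0
    lh = (a - np.roll(a, -1, axis=1)) / 4.0
    hl = (d + np.roll(d, -1, axis=1)) / 4.0
    hh = (d - np.roll(d, -1, axis=1)) / 4.0
    return ll, lh, hl, hh

img = np.random.rand(8, 8)
ll, lh, hl, hh = haar_swt2(img)
# all four subbands keep the 8x8 shape, unlike the decimated DWT
```

Because the SWT subbands match the interpolated DWT subbands in size, the element-wise addition used for the correction step is well defined.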







3.3 SYSTEM REQUIREMENTS
Hardware Requirements
Processor : > 2 GHz
Hard Disk : 80 GB
RAM : 512 MB

Software Requirements
Operating system : Windows 7 32 bit
Language : Java, MATLAB Version 7



Fig. 1. Block diagram of the proposed super resolution algorithm.






3.5 FEASIBILITY STUDY:
Technical Feasibility:
The technology used can be developed with the current equipment and has the
technical capacity to hold the data required by the new system.
This technology supports the modern trends of technology.
Easily accessible, more secure technologies.
Technical feasibility considers the existing system and to what extent it can support the proposed
addition. We can add new modules easily without affecting the core program. Most parts
run on the server using the concept of stored procedures.

Operational Feasibility:
The proposed system can easily be implemented, as it is based on JSP (Java) and HTML
coding. The database is created with MySQL Server, which is secure and easy to handle.
The resources required to implement and install the system are available, and the personnel of the
organization already have enough exposure to computers. So the project is operationally feasible.

Economical Feasibility:
Economic analysis is the most frequently used method for evaluating the effectiveness of a
new system. More commonly known as cost/benefit analysis, the procedure is to determine the
benefits and savings that are expected from a candidate system and compare them with the costs.
If the benefits outweigh the costs, then the decision is made to design and implement the system.
An entrepreneur must accurately weigh the costs versus the benefits before taking action. This
system is economically feasible, as it provides quick, online assessment. So it is
economically a good project.













CHAPTER 4













SYSTEM
DESIGN












4. SYSTEM DESIGN
Modules
DIGITAL IMAGE PROCESSING
BACKGROUND:
Digital image processing is an area characterized by the need for extensive experimental
work to establish the viability of proposed solutions to a given problem. An important
characteristic underlying the design of image processing systems is the significant level of
testing & experimentation that normally is required before arriving at an acceptable solution.
This characteristic implies that the ability to formulate approaches & quickly prototype candidate
solutions generally plays a major role in reducing the cost & time required to arrive at a viable
system implementation.
What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial
coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray
level of the image at that point. When x, y & the amplitude values of f are all finite discrete
quantities, we call the image a digital image. The field of DIP refers to processing digital image
by means of digital computer. Digital image is composed of a finite number of elements, each of
which has a particular location & value. The elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play the single
most important role in human perception. However, unlike humans, who are limited to the visual
band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from
gamma rays to radio waves. They can also operate on images generated by sources that humans are
not accustomed to associating with images.

What is an image?
An image is represented as a two dimensional function f(x, y) where x and y are spatial co-
ordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the
image at that point.
Image Types:
The toolbox supports four types of images:
1 .Intensity images
2. Binary images
3. Indexed images
4. R G B images
Most monochrome image processing operations are carried out using binary or intensity
images, so our initial focus is on these two image types. Indexed and RGB colour images are
described below.
Intensity Images:
An intensity image is a data matrix whose values have been scaled to represent intensities.
When the elements of an intensity image are of class uint8 or class uint16, they have integer
values in the range [0, 255] or [0, 65535], respectively. If the image is of class double, the
values are floating-point numbers. Values of scaled, double intensity images are in the range [0,
1] by convention.

Binary Images:
Binary images have a very specific meaning in MATLAB. A binary image is a logical array
of 0s and 1s. Thus, an array of 0s and 1s whose values are of numeric data class, say uint8, is not
considered a binary image in MATLAB. A numeric array is converted to binary using the function
logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the
statement
B = logical(A)
If A contains elements other than 0s and 1s, use of the logical function converts all
nonzero quantities to logical 1s and all entries with value 0 to logical 0s.
Using relational and logical operators also creates logical arrays.
To test whether an array is logical, we use the islogical function:
islogical(C)
If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be
converted to numeric arrays using the data class conversion functions.
Indexed Images:
An indexed image has two components:
A data matrix of integers, X.
A color map matrix, map.
Matrix map is an m*3 array of class double containing floating-point values in the range
[0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map
specifies the red, green, and blue components of a single color. An indexed image uses direct
mapping of pixel intensity values to color map values. The color of each pixel is determined by
using the corresponding value of the integer matrix X as a pointer into map. If X is of class
double, then all of its components with values less than or equal to 1 point to the first row in
map, all components with value 2 point to the second row, and so on. If X is of class uint8 or
uint16, then all components with value 0 point to the first row in map, all components with value
1 point to the second row, and so on.
RGB Image:
An RGB color image is an M*N*3 array of color pixels, where each color pixel is a triplet
corresponding to the red, green, and blue components of an RGB image at a specific spatial
location. An RGB image may be viewed as a stack of three gray scale images that, when fed into
the red, green, and blue inputs of a color monitor, produce a color image on the screen. By
convention, the three images forming an RGB color image are referred to as the red, green, and
blue component images. The data class of the component images determines their range of
values. If an RGB image is of class double, the range of values is [0, 1].
Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or
uint16, respectively. The number of bits used to represent the pixel values of the component
images determines the bit depth of an RGB image. For example, if each component image is an
8-bit image, the corresponding RGB image is said to be 24 bits deep.
Generally, the number of bits in all component images is the same. In this case the number
of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component
image. For the 8-bit case the number is 16,777,216 colors.
IMPROVED ALGORITHM (EXTENSION)

As mentioned before, resolution is an important feature in satellite imaging, which
makes the resolution enhancement of such images of vital importance, as increasing the
resolution of these images will directly affect the performance of the system using these images
as input. The main loss of an image after resolution enhancement by interpolation is
in its high-frequency components, which is due to the smoothing caused by interpolation.


Hence, in order to increase the quality of the enhanced image, preserving the edges is
essential. In this paper, DWT has been employed in order to preserve the high-frequency
components of the image.
DWT separates the image into different sub band images, namely, LL, LH, HL, and HH.
A high-frequency sub band contains the high frequency component of the image.
The interpolation can be applied to these four sub band images. In the wavelet domain,
the low-resolution image is obtained by low-pass filtering of the high-resolution image. The low
resolution image (LL sub band), without quantization (i.e., with double-precision pixel values) is
used as the input for the proposed resolution enhancement process.
In other words, the low-frequency sub band image is a low-resolution version of the original
image. Therefore, instead of using the low-frequency sub band image, which contains less
information than the original input image, we use the input image itself through the interpolation
process.
Hence, the input low-resolution image is interpolated with half of the interpolation
factor, α/2, used to interpolate the high-frequency sub bands, as shown in Fig. 5. In order to
preserve more edge information, i.e., to obtain a sharper enhanced image, we have proposed an
intermediate stage in the high-frequency sub band interpolation process. As shown in Fig. 5, the
low-resolution input satellite image and the interpolated LL image with factor 2 are highly
correlated. The difference between the LL sub band image and the low-resolution input image
lies in their high-frequency components. Hence, this difference image can be used in the
intermediate process to correct the estimated high-frequency components. This estimation is
performed by interpolating the high-frequency sub bands by factor 2 and then including the
difference image (which contains the high-frequency components of the low-resolution input
image) in the estimated high-frequency images, followed by another interpolation with factor
α/2 in order to reach the required size for the IDWT process. This intermediate process of adding
the difference image, which contains high-frequency components, yields a significantly sharper
final image.
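The correction described above reduces to a few array operations. The sketch below uses a nearest-neighbor stand-in for the bicubic interpolation and hypothetical helper names (`upscale2`, `correct_subbands`), so it illustrates the data flow rather than reproducing the exact implementation:

```python
import numpy as np

def upscale2(img):
    """Stand-in 2x enlargement (pixel repetition); the method itself
    uses bicubic interpolation here."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def correct_subbands(inp, ll, lh, hl, hh):
    """The difference between the input image and the interpolated LL
    subband carries the input's high-frequency content; adding it to
    each interpolated high-frequency subband corrects the estimates."""
    diff = inp - upscale2(ll)
    return upscale2(lh) + diff, upscale2(hl) + diff, upscale2(hh) + diff

inp = np.ones((4, 4))                 # flat image: LL reconstructs it
ll = np.ones((2, 2))                  # exactly, so the difference image
z = np.zeros((2, 2))                  # (and hence the correction) is zero
est_lh, est_hl, est_hh = correct_subbands(inp, ll, z, z, z)
```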



There is no general agreement among authors regarding where image processing stops &
other related areas, such as image analysis & computer vision, start. Sometimes a distinction is
made by defining image processing as a discipline in which both the input & output of a process
are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image
understanding) is in between image processing & computer vision.
There are no clear-cut boundaries in the continuum from image processing at one end to
complete vision at the other. However, one useful paradigm is to consider three types of
computerized processes in this continuum: low-, mid-, & high-level processes. A low-level
process involves primitive operations such as image preprocessing to reduce noise, contrast
enhancement & image sharpening. A low-level process is characterized by the fact that both its
inputs & outputs are images. A mid-level process on images involves tasks such as segmentation,
description of objects to reduce them to a form suitable for computer processing, &
classification of individual objects. A mid-level process is characterized by the fact that its inputs
generally are images but its outputs are attributes extracted from those images. Finally, higher-
level processing involves making sense of an ensemble of recognized objects, as in image
analysis, & at the far end of the continuum, performing the cognitive functions normally
associated with human vision.
Digital image processing, as already defined, is used successfully in a broad range of areas
of exceptional social & economic value.
































CHAPTER 5












IMPLEMENTATION













Implementation code:

%%% IMAGE Resolution Enhancement by Using Discrete and Stationary Wavelet Decomposition %%%

clc;
close all;
clear all;
%-------------------------------------------------------------------

img=uigetfile('*.jpg','select the source image');
i=imread(img);
i=imresize(i,[256 256]);
imshow(i),title('Original low resolution image');
%-----------------------------------------------------------

if size(i,3)>1
i=rgb2gray(i);
end
g=im2double(i);

%==============Bi-linear interpolation========================
bil=my_interp(g,2);
bil=imresize(bil,[256 256]);
figure,imshow(mat2gray(bil)),title('bilinear interpolated image')


%-----------------------------------------------------------
%--------------bicubic interpolation------------------------
scale=2;
if scale>0
    [r,c,h]=size(g);
    % pad the image with a 2-pixel border; keep double precision
    % (casting to uint8 here would destroy the [0,1] double data)
    tempim=zeros(r+4,c+4,h);
    tempim(3:r+2,3:c+2,:)=g(:,:,:);
    newim=zeros(floor(r*scale),floor(c*scale),h);
    for i1=1:size(newim,1)
        for j=1:size(newim,2)
            for x_i=floor((i1-1)/scale)+1:floor((i1-1)/scale)+5
                for y_j=floor((j-1)/scale)+1:floor((j-1)/scale)+5
                    % cubicCalcDistance is a user-defined cubic kernel weight function
                    x_d=cubicCalcDistance((i1-1)/scale+3,x_i);
                    y_d=cubicCalcDistance((j-1)/scale+3,y_j);
                    newim(i1,j,:)=newim(i1,j,:)+tempim(x_i,y_j,:)*x_d*y_d;
                end
            end
        end
    end
    newim=imresize(newim,[256 256]);




figure,imshow(mat2gray(newim));
title('Bicubic interpolated image');
end


%---------------wavelet zero padding------------------------

l1=zeros(size(g));
l2=zeros(size(g));
l3=zeros(size(g));

w=idwt2(g,l1,l2,l3,'db7');
w=imresize(w,[256 256]);
figure,imshow(mat2gray(w));
title('Super resolved image using WZP')

%---------------proposed methodology------------------------

alpha=2;

a=my_interp(g,alpha/2);

%---------------apply DWT-----------------------------------

[LL1 LH1 HL1 HH1]=dwt2(g,'db7');
t=[LL1 LH1;HL1 HH1];
%figure(2),imshow(t,[]);title('Wavelet decomposed image with db7');

%---------------apply SWT-----------------------------------

[LL LH HL HH]=swt2(g,1,'db7');
s=[LL LH;HL HH];
%figure,imshow(s,[]);title('SWT decomposed image with db9');

%---------------interpolation with factor 2 ----------------
A1=my_interp(LH1,2);
A2=my_interp(HL1,2);
A3=my_interp(HH1,2);

%----------------------------------------------------------

y1=imresize(A1,size(LH));
y2=imresize(A2,size(HL));
y3=imresize(A3,size(HH));

%----------------------------------------------------------

a1=y1+LH; %% estimated LH
a2=y2+HL; %% estimated HL
a3=y3+HH; %% estimated HH

%----------------------------------------------------------
b1=my_interp(a1,alpha/2);
b2=my_interp(a2,alpha/2);
b3=my_interp(a3,alpha/2);

%----------------apply IWT---------------------------------

rt=idwt2(a,b1,b2,b3,'db7');
rt=imresize(rt,size(g));
figure,imshow(mat2gray(rt)),title('Proposed technique');

%%%---------------Extension Work------------------------------

[ll lh hl hh]=dwt2(g,'haar'); %% step 1
bic1=imresize(ll,2,'bicubic'); %% step 2
dif=g-bic1; %% step 3

%-------------------------------------------------------------

bic2=imresize(lh,2,'bicubic'); %% step 4
bic3=imresize(hl,2,'bicubic'); %% step 5
bic4=imresize(hh,2,'bicubic'); %% step 6

%-------------------------------------------------------------

est_lh=dif+bic2; %% Estimated LH
est_hl=dif+bic3; %% Estimated HL %% step 7
est_hh=dif+bic4; %% Estimated HH

%--------------------------------------------------------------

ll1=imresize(g,1,'bicubic'); %% the input image replaces the LL subband
ilh=imresize(est_lh,1,'bicubic');
ihl=imresize(est_hl,1,'bicubic'); %% step 8: use the corrected estimates
ihh=imresize(est_hh,1,'bicubic');

rimg=idwt2(ll1,ilh,ihl,ihh,'haar'); %% step 9
rimg=imresize(rimg,size(g));
figure,imshow(mat2gray(rimg)),title('Future technique');

%----------psnr calculation-------------------------------------

PSNR1=psnr(g,bil) %Bi-linear interpolated
PSNR2=psnr(g,im2double(newim)) %Bicubic interpolated
PSNR3=psnr(g,w) %using WZP
PSNR4=psnr(g,rt) %Proposed technique
PSNR5=psnr(g,rimg) %Future technique
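The psnr calls above compare each result against the reference g using PSNR = 10*log10(peak^2 / MSE). A NumPy equivalent of that formula (an illustrative helper, not MATLAB's built-in psnr) is:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB; peak=1.0 suits double images
    whose values lie in [0, 1], as in the script above."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    if mse == 0:
        return float('inf')           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1                     # uniform error 0.1 -> MSE = 0.01
# psnr(ref, noisy) -> 10 * log10(1 / 0.01) = 20 dB
```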


CHAPTER 6

























TESTING





6. TESTING
6.1 TEST PROCEDURE


SYSTEM TESTING:


Testing is performed to identify errors. It is used for quality assurance. Testing is an
integral part of the entire development and maintenance process. The goal of testing during this
phase is to verify that the specification has been accurately and completely incorporated into the
design, as well as to ensure the correctness of the design itself. For example, any logic faults in
the design must be detected before coding commences; otherwise, the cost of fixing the faults
will be considerably higher. Detection of design faults can be achieved by means of inspections
as well as walkthroughs.

Testing is one of the important steps in the software development phase. Testing checks for
the errors, as a whole of the project testing involves the following test cases:

Static analysis is used to investigate the structural properties of the Source code.

Dynamic testing is used to investigate the behavior of the source code by executing the
program on the test data.


6.2 TEST DATA AND OUTPUT


6.2.1 UNIT TESTING:

Unit testing is conducted to verify the functional performance of each modular
component of the software. Unit testing focuses on the smallest unit of the software design (i.e.),
the module. The white-box testing techniques were heavily employed for unit testing.






6.2.2 FUNCTIONAL TESTING:

Functional test cases involved exercising the code with nominal input values for
which the expected results are known, as well as boundary values and special values, such as
logically related inputs, files of identical elements, and empty files.



Three types of tests in Functional test:

1. Performance Test

2. Stress Test

3. Structure Test

6.2.3 PERFORMANCE TESTING:

It determines the amount of execution time spent in various parts of the unit, program
throughput, and response time and device utilization by the program unit.


6.2.4 STRESS TESTING:

Stress tests are those tests designed to intentionally break the unit. A great deal can be
learned about the strengths and limitations of a program by examining the manner in which a
program unit breaks.

6.2.5 STRUCTURED TESTING:

Structure tests are concerned with exercising the internal logic of a program and
traversing particular execution paths. A White-Box test strategy was employed to ensure that the
test cases could guarantee that all independent paths within a module have been exercised at
least once.

Exercise all logical decisions on their true or false sides.

Execute all loops at their boundaries and within their operational bounds.

Exercise internal data structures to assure their validity.

Checking attributes for their correctness.


Handling end of file condition, I/O errors, buffer problems and textual errors in output
information

6.2.6 INTEGRATION TESTING:

Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing; i.e.,

integration testing is the complete testing of the set of modules which makes up the
product. The objective is to take unit-tested modules and build a program structure; the tester
should identify critical modules, and critical modules should be tested as early as possible. One
approach is to wait until all the units have passed testing, and then combine them and test them
together. This approach evolved from unstructured testing of small programs. Another strategy
is to construct the product in increments of tested units: a small set of modules is integrated
together and tested, to which another module is added and tested in combination, and so on. The
advantage of this approach is that interface discrepancies can be easily found and corrected.

The major error that was faced during the project was a linking error: when all the modules
were combined, the links were not set properly with all support files. We then checked the
interconnections and the links. Errors are localized to the new module and its
intercommunications. The product development can be staged, and modules integrated as they
complete unit testing. Testing is completed when the last module is integrated and tested.


6.3 TESTING TECHNIQUES / TESTING STRATEGIES:

6.3.1 TESTING:

Testing is a process of executing a program with the intent of finding an error. A good
test case is one that has a high probability of finding an as-yet undiscovered error. A successful
test is one that uncovers an as-yet undiscovered error. System testing is the stage of
implementation, which is aimed at ensuring that the system works accurately and efficiently as
expected before live operation commences. It verifies that the whole set of

programs hangs together. System testing requires a test plan that consists of several key
activities and steps for running program, string, and system tests, and is important in adopting a
successful new system. This is the last chance to detect and correct errors before the system is
installed for user acceptance testing.



The software testing process commences once the program is created and the documentation and
related data structures are designed. Software testing is essential for correcting errors; otherwise,
the program or the project is not said to be complete. Software testing is a critical element of
software quality assurance and represents the ultimate review of specification, design, and
coding. Testing is the process of executing the program with the intent of finding an error. A
good test case design is one that has a high probability of finding an as-yet undiscovered error. A
successful test is one that uncovers an as-yet undiscovered error. Any engineering product can be
tested in one of two ways:

6.3.1.1 WHITE BOX TESTING

This testing is also called glass box testing. Knowing the internal workings of a
product, tests are conducted to ensure that the internal operations perform according to
specification and that all internal components have been adequately exercised. It is a test case
design method that uses the control structure of the procedural design to derive test cases.
Basis path testing is a white box testing technique.

Basis path testing:

Flow graph notation

Cyclomatic complexity

Deriving test cases

Graph matrices
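The cyclomatic complexity listed above can be computed directly from the flow graph notation as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The flow graph below is an illustrative example, not one taken from the project.

```python
# Cyclomatic complexity from a flow graph: V(G) = E - N + 2P.
# V(G) gives the number of independent basis paths that must be tested.

def cyclomatic_complexity(nodes, edges, components=1):
    return len(edges) - len(nodes) + 2 * components

# Flow graph of a single if/else: entry -> (then | else) -> exit.
nodes = ["entry", "then", "else", "exit"]
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]

# One decision gives two basis paths.
assert cyclomatic_complexity(nodes, edges) == 2
```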



6.3.1.2 BLACK BOX TESTING

In this testing, knowing the specific functions that a product has been designed to
perform, tests are conducted to demonstrate that each function is fully operational while at the
same time searching for errors in each function. It fundamentally focuses on the functional
requirements of the software.


The steps involved in black box test case design are:

Graph based testing methods

Equivalence partitioning

Boundary value analysis

Comparison testing
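Equivalence partitioning and boundary value analysis from the list above can be illustrated against a small validator. The function and its 18 to 60 valid range are assumed for illustration only, not taken from the project's requirements.

```python
# Hypothetical validator used to demonstrate the two black box methods.

def is_valid_age(age):
    return 18 <= age <= 60

# Equivalence partitioning: one representative per partition is enough.
assert is_valid_age(35) is True    # valid partition
assert is_valid_age(5) is False    # invalid partition (too low)
assert is_valid_age(99) is False   # invalid partition (too high)

# Boundary value analysis: probe just below, on, and just above each
# edge of the valid range, where off-by-one defects tend to cluster.
for age, expected in [(17, False), (18, True), (19, True),
                      (59, True), (60, True), (61, False)]:
    assert is_valid_age(age) is expected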


6.3.2 SOFTWARE TESTING STRATEGIES:

A software testing strategy provides a road map for the software developer. Testing is a
set of activities that can be planned in advance and conducted systematically. For this reason a
template for software testing, a set of steps into which we can place specific test case design
methods, should be defined. Any testing strategy should have the following characteristics:

Testing begins at the module level and works outward toward the integration of the entire
computer-based system.

Different testing techniques are appropriate at different points in time.

The developer of the software and an independent test group conduct testing.

Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.



6.3.2.1 INTEGRATION TESTING:

Integration testing is a systematic technique for constructing the program
structure while at the same time conducting tests to uncover errors associated with interfacing.
Individual modules, which are highly prone to interface errors, should not be assumed to work
instantly when we put them together. The problem, of course, is putting them together:
interfacing. Data can be lost across an interface; sub-functions, when combined, may not
produce the desired major function; individually acceptable imprecision may be magnified to
unacceptable levels; and global data structures can present problems.



6.3.2.2 PROGRAM TESTING:

Program testing points out logical and syntax errors. A syntax error is an error
in a program statement that violates one or more rules of the language in which it is written.
An improperly defined field dimension or an omitted keyword are common syntax errors. These
errors are shown through error messages generated by the compiler. A logic error, on the other
hand, deals with incorrect data fields, out-of-range items, and invalid combinations. Since the
compiler will not detect logical errors, the programmer must examine the output. Condition
testing exercises the logical conditions contained in a module. The possible types of elements
in a condition include a Boolean operator, a Boolean variable, a pair of Boolean parentheses,
a relational operator, or an arithmetic expression.
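Condition testing as described above can be sketched against a small compound condition. The function below is hypothetical; the condition mixes a Boolean variable, a relational operator, and Boolean operators, and each test drives one element of the condition to a different outcome.

```python
# Condition testing sketch for a hypothetical module.

def can_ship(in_stock, quantity, is_backorder):
    # Compound condition under test: Boolean variable, relational
    # operator, and the Boolean operators 'and' / 'or'.
    return (in_stock and quantity > 0) or is_backorder

# Each test flips a different element of the condition.
assert can_ship(True, 1, False) is True    # both sub-conditions true
assert can_ship(False, 1, False) is False  # Boolean variable false
assert can_ship(True, 0, False) is False   # relational operator false
assert can_ship(False, 0, True) is True    # 'or' branch dominates
```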






6.3.2.3 SECURITY TESTING:

Security testing attempts to verify that the protection mechanisms built into a system will, in
fact, protect it from improper penetration. The system must be tested for invulnerability from
frontal attack, and must also be tested for invulnerability from rear attack. During security
testing, the tester plays the role of an individual who desires to penetrate the system.



6.3.2.4 VALIDATION TESTING

At the culmination of integration testing, software is completely assembled as a
package, interfacing errors have been uncovered and corrected, and a final series of software
tests, validation testing, begins. Validation testing can be defined in many ways, but a simple
definition is that validation succeeds when the software functions in a manner that is reasonably
expected by the customer. Software validation is achieved through a series of black box tests
that demonstrate conformity with requirements. After a validation test has been conducted, one
of two conditions exists:

The function or performance characteristics conform to specifications and are accepted.

A deviation from specification is uncovered and a deficiency list is created.

Deviations or errors discovered at this step are corrected prior to completion of the
project, with the help of the user, by negotiating to establish a method for resolving
deficiencies. The proposed system under consideration has been tested using validation
testing and found to be working satisfactorily. Though there were deficiencies in the system,
they were not catastrophic.

6.3.2.5 USER ACCEPTANCE TESTING

User acceptance of the system is a key factor for the success of any system.
The system under consideration was tested for user acceptance by constantly keeping
in touch with prospective system users at the time of development and making
changes whenever required.
This is done with regard to the following points:

Input screen design.

Output screen design.

Menu driven system.



6.4 Test Cases:

















Test Case   Check    Objective              Expected Result

TC-001      User     Request                Response
TC-002      User     Registration process   Creating account
TC-003      User     Enter id, pwd          Success if entered values are correct, otherwise failure
TC-004      User     Creating group id      Creates groups
TC-005      Server   Provides access        Sharing data
TC-007      User     Enter wrong id, pwd    Error page will display
TC-008      User     Delete groups          Group will be deleted
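TC-003 and TC-007 can be expressed as executable checks. The credential store and the login function below are hypothetical stand-ins for the real system's authentication, included only to make the expected results concrete.

```python
# Executable sketch of TC-003 and TC-007 against a stand-in login.

USERS = {"user1": "secret"}  # assumed test fixture, not real data

def login(user_id, pwd):
    """Return 'success' for correct credentials, else the error page."""
    if USERS.get(user_id) == pwd:
        return "success"
    return "error page"

# TC-003: entered values are correct, so the login succeeds.
assert login("user1", "secret") == "success"

# TC-007: wrong id/pwd, so the error page is displayed.
assert login("user1", "wrong") == "error page"
assert login("nobody", "secret") == "error page"
```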



CHAPTER 7




RESULTS






Screenshots and outputs



SIMULATION RESULTS

Fig. 2 shows that the super resolved image of the Baboon picture obtained using the proposed
technique in (d) is much better than the low resolution image in (a), the super resolved image
obtained using bicubic interpolation in (b), and WZP in (c). Note that the input low resolution
images have been obtained by down-sampling the original high resolution images. In order to
show the effectiveness of the proposed method over conventional and state-of-the-art image
resolution enhancement techniques, four well-known test images (Lena, Elaine, Baboon, and
Peppers) with different features are used for comparison.
Table I compares the PSNR performance of the proposed technique using bicubic
interpolation with conventional and state-of-the-art resolution enhancement techniques:
bilinear, bicubic, WZP, NEDI, HMM, HMM SR, WZP-CS, WZP-CS-ER, DWT SR, CWT SR, and
regularity-preserving image interpolation. The results in Table I indicate that the proposed
technique outperforms the aforementioned conventional and state-of-the-art image resolution
enhancement techniques.
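The PSNR figures in Table I follow the standard definition, 10*log10(MAX^2 / MSE) with MAX = 255 for 8-bit images; a minimal sketch of that computation:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 1 grey level gives 20*log10(255), about 48.13 dB.
a = np.zeros((8, 8), dtype=np.uint8)
b = np.ones((8, 8), dtype=np.uint8)
assert abs(psnr(a, b) - 48.1308) < 1e-3
```

Higher PSNR means the super resolved image is closer, pixel for pixel, to the original high resolution image.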


Fig. 1. Simulation results




CHAPTER 8





CONCLUSION &
FUTURE SCOPE



CONCLUSION & FUTURE SCOPE
This work proposed an image resolution enhancement technique based on the
interpolation of the high frequency subbands obtained by DWT, correcting the high frequency
subband estimation by using the SWT high frequency subbands, together with the input image.
The proposed technique uses DWT to decompose an image into different subbands, and then
the high frequency subband images are interpolated. The interpolated high frequency subband
coefficients are corrected by using the high frequency subbands obtained by SWT of the
input image. The original image is interpolated with half of the interpolation factor used for
interpolating the high frequency subbands. Afterwards all these images are combined using
IDWT to generate a super resolved image. The proposed technique has been tested on
well-known benchmark images, where the PSNR and visual results show the superiority of the
proposed technique over conventional and state-of-the-art image resolution enhancement
techniques.
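The pipeline described above can be sketched in plain numpy for an enhancement factor of 2. This is a simplified illustration, not the project's implementation: a Haar wavelet stands in for the wavelet actually used, nearest-neighbour upsampling stands in for bicubic interpolation, and for a factor of 2 the input image itself serves as the LL estimate.

```python
import numpy as np

def haar_dwt2(x):
    """One-level decimated Haar DWT: returns LL, LH, HL, HH (half size)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: doubles each dimension."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def haar_swt2(x):
    """One-level undecimated (stationary) Haar transform, same size as x."""
    r = np.roll(x, -1, axis=1)                  # right neighbour
    dn = np.roll(x, -1, axis=0)                 # lower neighbour
    dr = np.roll(r, -1, axis=0)                 # diagonal neighbour
    return ((x + r + dn + dr) / 2, (x + r - dn - dr) / 2,
            (x - r + dn - dr) / 2, (x - r - dn + dr) / 2)

def upsample2(s):
    """Nearest-neighbour stand-in for the bicubic interpolation step."""
    return np.repeat(np.repeat(s, 2, axis=0), 2, axis=1)

def enhance(lr):
    """DWT/SWT resolution enhancement for a factor of 2."""
    _, lh, hl, hh = haar_dwt2(lr)               # decompose the LR input
    _, slh, shl, shh = haar_swt2(lr)            # same-size SWT subbands
    lh = upsample2(lh) + slh                    # interpolate the high
    hl = upsample2(hl) + shl                    # frequency subbands and
    hh = upsample2(hh) + shh                    # correct them with SWT
    return haar_idwt2(lr, lh, hl, hh)           # recombine via IDWT

lr = np.random.default_rng(0).random((32, 32))
assert np.allclose(haar_idwt2(*haar_dwt2(lr)), lr)  # perfect reconstruction
assert enhance(lr).shape == (64, 64)                # doubled resolution
```

The shape check confirms the key property of the method: the corrected high frequency subbands and the input image, combined through the inverse transform, yield an output with twice the resolution of the input.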








CHAPTER 9






REFERENCES





CHAPTER 11





JOURNAL PAPER
PUBLICATION













WORK