
A PROJECT REPORT

On

DIGITAL IMAGE ENHANCEMENT TECHNIQUES

Under the guidance of Dr. Akhilesh Yadav


Submitted by

Suraj Singh 521012294


in partial fulfillment of the requirement for the award of the degree

Of

BCA-VIth Sem.

June-July-2013

BONAFIDE CERTIFICATE
Certified that this project report titled DIGITAL IMAGE ENHANCEMENT TECHNIQUES is the bonafide work of Suraj Singh, who carried out the project work under my supervision.

SIGNATURE HEAD OF THE DEPARTMENT

SIGNATURE FACULTY IN CHARGE

ACKNOWLEDGEMENTS
It is with immense pleasure that we express our deep sense of gratitude to our guide and lecturer at the Computer Center, Dr. Akhilesh Yadav, for the constant and valuable guidance extended to us during the course of our project work.

We are greatly thankful to the staff of Sikkim Manipal University for their patience and helping attitude. We are also thankful to our friends for their constructive criticism during the course of the project. Finally, we would like to thank everybody who directly or indirectly helped in making this project a success.

Suraj Singh

CONTENTS

1. ABSTRACT
2. INTRODUCTION
   2.1 Introduction to Digital Image Processing
   2.2 Objectives of Filters
3. FILTERING TECHNIQUES
   3.1 Spatial Domain Filters
   3.2 Frequency Domain Filters
   3.3 Types of Filters
4. SYSTEM SPECIFICATION
5. ABOUT JAVA
6. SOFTWARE DEVELOPMENT
   6.1 Data Flow Diagrams
   6.2 Hierarchical Charts
7. TESTING AND IMPLEMENTATION
8. CONCLUSION
9. OUTPUT SCREENS
10. BIBLIOGRAPHY

PROBLEM INTRODUCTION
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.

For image processing, three types of computerized processes are in use: low-level processes, mid-level processes, and high-level processes.

In our project we deal with low-level processes, which involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. Low-level processing is characterized by the fact that both the input and the output are images.

PROBLEM DEFINITION

Enhancement techniques process an image so that the result is more suitable than the original for a particular application. The images received from different sources may not be clear, so we use these enhancements to improve the image according to our application. The techniques used are:

1. Negative
2. Grayscale
3. Rotate
4. Level Slicing
5. Contrast Stretching
6. Threshold
7. Low Pass Filtering
8. High Pass Filtering
9. High Boost Filtering
10. Histogram and Histogram Equalization

Whenever the received image is misaligned, we use the rotate technique. When the image is noisy, the low pass filtering technique is used. To highlight detail in the image, we go for the high boost filtering technique. For a graphical representation of the gray-level distribution we use the histogram technique, and for converting a color image to a black-and-white image the grayscale technique is used.

1. ABSTRACT
The field of digital image processing refers to processing digital images by means of a digital computer. One of the main application areas of digital image processing methods is improving pictorial information for human interpretation. Most digital images contain noise, which can be removed by many enhancement techniques. Filtering is one such enhancement technique, used to remove unwanted information (noise) from an image; it is also used for image sharpening and smoothing. Some neighborhood operations work with the values of the image pixels in a neighborhood and the corresponding values of a subimage that has the same dimensions as the neighborhood; this subimage is called a filter. The aim of this project is to demonstrate filtering techniques by performing different operations such as smoothing, sharpening, and removing noise. The project has been developed in the Java language because of its universal acceptance and easy understandability.

2. INTRODUCTION
2.1. INTRODUCTION TO DIGITAL IMAGE PROCESSING: Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels; pixel is the term most widely used to denote the elements of a digital image. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images.

Filters are one digital image enhancement technique, used to sharpen the image and to reduce the noise in it. There are two families of enhancement techniques, called spatial domain and frequency domain techniques, which are in turn categorized into filters for smoothing and for sharpening images.
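For reference, the usual way of writing this down (following standard textbook notation rather than anything specific to this project) is as an M x N matrix of intensity values, one element per pixel:

    f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
                f(1,0)     f(1,1)     ...  f(1,N-1)
                ...
                f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]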

2.2 OBJECTIVES OF FILTERS: Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as removal of small details from an image prior to object extraction and bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering. The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition. Uses of image sharpening vary and include applications ranging from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems.

3. FILTERING TECHNIQUES
Filtering is one of the image enhancement techniques used for sharpening and smoothing an image, thereby removing noise from it.

3.1 SPATIAL FILTERING: Filtering operations that are performed directly on the pixels of an image are referred to as spatial filtering. The process of spatial filtering consists simply of moving a filter mask from point to point in an image. At each point (x, y), the response of the filter at that point is calculated using a predefined relationship. For linear spatial filtering, the response is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask. For a 3x3 mask, the result R of linear filtering with the filter mask at a point (x, y) in the image is

R = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + ... + w(0,0) f(x, y) + ... + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1)

which is the sum of products of the mask coefficients with the corresponding pixels directly under the mask. Note in particular that the coefficient w(0,0) coincides with the image value f(x, y), indicating that the mask is centered at (x, y) when the computation of the sum of products takes place. For a mask of size m x n, we assume that m = 2a+1 and n = 2b+1, where a and b are nonnegative integers; that is, our focus in the following discussion is on masks of odd sizes, with the smallest meaningful size being 3x3. The mechanics of spatial filtering are illustrated in the following figure.

Figure: the mechanics of spatial filtering with a 3x3 mask.

Mask coefficients, showing the coordinate arrangement:

    w(-1,-1)  w(-1,0)  w(-1,1)
    w(0,-1)   w(0,0)   w(0,1)
    w(1,-1)   w(1,0)   w(1,1)

Pixels of the image section f(x, y) under the mask:

    f(x-1,y-1)  f(x-1,y)  f(x-1,y+1)
    f(x,y-1)    f(x,y)    f(x,y+1)
    f(x+1,y-1)  f(x+1,y)  f(x+1,y+1)

The process of linear filtering is similar to a frequency domain concept called convolution. For this reason, linear spatial filtering is often referred to as convolving a mask with an image; similarly, filter masks are sometimes called convolution masks, and the term convolution kernel is also in common use. In terms of images, the operation amounts to sliding the 3x3 mask over every pixel and computing the weighted sum shown above.
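As an illustration, the following minimal Java sketch applies a 3x3 mask to a grayscale image stored as a one-dimensional int array. It mirrors the approach of the convfilter class listed later in this report, but it is an independent, simplified sketch rather than the project code itself:

public class Conv3x3 {
    // Apply a 3x3 mask to a grayscale image (one intensity per element).
    // Border pixels are left unchanged for simplicity.
    static int[] filter(int[] img, int w, int h, int[] mask, double mult) {
        int[] out = img.clone();
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double sum = 0;
                for (int s = -1; s <= 1; s++)
                    for (int t = -1; t <= 1; t++)
                        sum += mask[(s + 1) * 3 + (t + 1)] * img[(y + s) * w + (x + t)];
                sum *= mult;                       // e.g. 1/9 for an averaging mask
                out[y * w + x] = (int) Math.max(0, Math.min(255, sum));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] box = { 1, 1, 1, 1, 1, 1, 1, 1, 1 }; // averaging (lowpass) mask
        int[] img = new int[100 * 100];            // a flat 100x100 test image
        java.util.Arrays.fill(img, 128);
        int[] smoothed = filter(img, 100, 100, box, 1.0 / 9.0);
        System.out.println(smoothed[50 * 100 + 50]); // prints 128
    }
}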

3.1.1 Smoothing Spatial Filters: Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing steps, such as removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by non-linear filtering.

Smoothing Linear Filters: The response of a smoothing linear spatial filter is simply the average of the pixels contained in the neighborhood of the filter mask. These filters are sometimes called averaging filters; they are also referred to as lowpass filters. The idea behind smoothing filters is straightforward: by replacing the value of every pixel in an image with the average of the gray levels in the neighborhood defined by the filter mask, this process results in an image with reduced sharp transitions in gray levels. Because random noise typically consists of sharp transitions in gray levels, the most obvious application of smoothing is noise reduction. However, edges also are characterized by sharp transitions in gray levels, so averaging filters have the undesirable side effect that they blur edges. Another application of this type of process is the smoothing of false contours that result from using an insufficient number of gray levels. A major use of averaging filters is the reduction of irrelevant detail in an image, where "irrelevant" means pixel regions that are small with respect to the size of the filter mask. A spatial averaging filter in which all coefficients are equal is sometimes called a box filter.

Smoothing Non-Linear Spatial Filters: Non-linear spatial filters are order-statistics filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter and then replacing the value of the center pixel with the value determined by the ranking result. The best known example in this category is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel.

3.1.2 Sharpening Spatial Filters: The principal objective of sharpening is to highlight fine detail in an image or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition. Uses of image sharpening vary and include applications ranging from electronic printing and medical imaging to industrial inspection and autonomous guidance in military systems. In the last section, we saw that image blurring can be accomplished in the spatial domain by pixel averaging in a neighborhood. Since averaging is analogous to integration, it is logical to conclude that sharpening can be accomplished by spatial differentiation. This is in fact the case, and this section deals with various ways of defining and implementing operators for sharpening by digital differentiation. Fundamentally, the strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied. Thus, image differentiation enhances edges and other discontinuities and deemphasizes areas with slowly varying gray-level values.

3.2 FILTERING IN THE FREQUENCY DOMAIN:

3.2.1 Basics of filtering in the frequency domain: The Fourier transform is composed of the sum of all values of the function f(x). The values of f(x), in turn, are multiplied by sines and cosines of various frequencies. The domain over which the values of F(u) range is appropriately called the frequency domain, because u determines the frequency of the components of the transform. Each of the M terms of F(u) is called a frequency component of the transform. Use of the terms "frequency domain" and "frequency components" is really no different from the terms "time domain" and "time components", which we would use to express the domain and values of f(x) if x were a time variable.

Filtering in the frequency domain is straightforward. It consists of the following steps:

1. Multiply the input image by (-1)^(x+y) to center the transform.
2. Compute F(u, v), the DFT of the image from (1).
3. Multiply F(u, v) by a filter function H(u, v).
4. Compute the inverse DFT of the result in (3).
5. Obtain the real part of the result in (4).
6. Multiply the result in (5) by (-1)^(x+y).

The reason that H(u, v) is called a filter is that it suppresses certain frequencies in the transform while passing others.
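The six steps above translate almost line for line into code. The sketch below assumes hypothetical helpers fft2d and ifft2d, since a 2-D FFT is not part of the Java standard library, and an assumed Complex value class with a public field re and a multiply(double) method; everything else follows the steps literally:

// Sketch of frequency-domain filtering, following the six steps above.
// fft2d, ifft2d, and Complex are assumed helpers, not standard library API.
static double[][] filterInFrequencyDomain(double[][] f, double[][] H) {
    int M = f.length, N = f[0].length;
    double[][] g = new double[M][N];

    // Step 1: multiply by (-1)^(x+y) to center the transform.
    for (int x = 0; x < M; x++)
        for (int y = 0; y < N; y++)
            f[x][y] *= ((x + y) % 2 == 0) ? 1 : -1;

    Complex[][] F = fft2d(f);                    // Step 2: compute the DFT

    for (int u = 0; u < M; u++)                  // Step 3: multiply by H(u,v)
        for (int v = 0; v < N; v++)
            F[u][v] = F[u][v].multiply(H[u][v]);

    Complex[][] out = ifft2d(F);                 // Step 4: inverse DFT

    for (int x = 0; x < M; x++)
        for (int y = 0; y < N; y++)
            // Steps 5 and 6: take the real part, then undo the centering.
            g[x][y] = out[x][y].re * (((x + y) % 2 == 0) ? 1 : -1);
    return g;
}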


Instead of using a filter mask, one can work in frequency space using the convolution theorem. Application of the mask to each pixel (x, y) is basically a convolution process, so the same results can be obtained by multiplying the Fourier transforms of the image and the mask and then inverse Fourier transforming the product. The reason for this approach is that it is sometimes much easier to specify the filter in frequency space, and for masks of modest size (e.g., 7x7 or larger) it is faster to work with the Fourier transforms. In determining H(n, m), the transfer function that corresponds to the impulse function h(x, y), the need to preserve phase requires that H(n, m) be real, i.e., have no imaginary components. This implies that the impulse function is symmetric: h(x, y) = h(-x, -y). In the interest of simplicity, the discussion here assumes circular symmetry, that is, H(n, m) => H(p), where p^2 = n^2 + m^2.

3.2.2 Smoothing Frequency Domain Filters: As indicated earlier, edges and other sharp transitions in the gray levels of an image contribute significantly to the high-frequency content of its Fourier transform. Hence smoothing is achieved in the frequency domain by attenuating a specified range of high-frequency components in the transform of a given image. Our basic model for filtering in the frequency domain is given by

G(u, v) = H(u, v) F(u, v)

where F(u, v) is the Fourier transform of the image to be smoothed. The objective is to select a filter transfer function H(u, v) that yields G(u, v) by attenuating the high-frequency components of F(u, v).

3.2.3 Sharpening Frequency Domain Filters: Image sharpening can be achieved in the frequency domain by a highpass filtering process, which attenuates the low-frequency components without disturbing the high-frequency information in the Fourier transform. The transfer function of a highpass filter is

H_hp(u, v) = 1 - H_lp(u, v)

where H_lp(u, v) is the transfer function of the corresponding lowpass filter. That is, when the lowpass filter attenuates frequencies, the highpass filter passes them, and vice versa.

3.2.4 Fourier Transformation: The Fourier transform is linear and associative under addition, but it is not associative under multiplication. Thus, Fourier methods are suitable for removing noise from images only when the noise can be modeled as an additive term to the original image. However, if defects of the image, e.g., uneven lighting, have to be modeled as multiplicative rather than additive, direct application of Fourier methods is inappropriate. In terms of the illuminance and reflectance of an object, an image of the object might be modeled as f(x, y) = i(x, y) r(x, y). In this case, some way of converting multiplication into addition must be employed before trying to apply Fourier filtering. The obvious way to do this is to take logarithms of both sides:

q(x, y) = ln[ i(x, y) r(x, y) + 1 ] = ln[i(x, y)] + ln[r(x, y)]

where 1 has been added to the image values to avoid problems with ln[0].

3.3 TYPES OF FILTERS:

3.3.1 Spatial Filters:

a. Mean Filters: These are noise-reduction linear spatial filters. There are four types of mean filters:
1. Arithmetic Mean Filter
2. Geometric Mean Filter
3. Harmonic Mean Filter
4. Contraharmonic Mean Filter

1. Arithmetic Mean Filter: This is the simplest of the mean filters. Let S_xy represent the set of coordinates in a rectangular subimage window of size m x n, centered at point (x, y). The arithmetic mean filtering process computes the average value of the corrupted image g(x, y) in the area defined by S_xy. The value of the restored image f^ at any point (x, y) is simply the arithmetic mean computed using the pixels in the region defined by S_xy. A mean filter simply smoothes local variations in an image, and noise is reduced as a result of blurring.

Arithmetic mean filters are well suited for random noise like Gaussian or uniform noise.

Example (the illustrating figures are omitted in this copy): an original image with a sharp edge and one outlier, and the same image after filtering with a mean filter.

b. Ranking Filter: Order-statistics filters are spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter. The response of the filter at any point is determined by the ranking result.

c. Median Filter: The best known order-statistics filter is the median filter, which, as its name implies, replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel:

f^(x, y) = median of g(s, t) over all (s, t) in S_xy

The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size. They are particularly effective in the presence of both bipolar and unipolar impulse noise, also called salt-and-pepper noise because of its appearance as white and black dots superimposed on an image; the median filter yields excellent results for images corrupted by this type of noise.

Example (the illustrating figures are omitted in this copy): an original image with a sharp edge and one outlier, and the same image after filtering with a median filter.


d. Max and Min Filters: Although the median filter is by far the most useful order-statistics filter in image processing, it is by no means the only one. The median represents the 50th percentile of a ranked set of numbers, but ranking lends itself to many other possibilities. Using the 100th percentile results in the max filter, given by

f^(x, y) = max of g(s, t) over all (s, t) in S_xy

This filter is useful for finding the brightest points in an image. Also, because pepper noise has very low values, it is reduced by this filter as a result of the max selection process in the subimage area S_xy. The 0th percentile filter is the min filter:

f^(x, y) = min of g(s, t) over all (s, t) in S_xy

This filter is useful for finding the darkest points in an image. Also, it reduces salt noise as a result of the min operation.
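A minimal Java sketch of a 3x3 order-statistics filter over a grayscale array follows; picking index 4 of the sorted window gives the median, index 0 the min, and index 8 the max. This is an illustrative sketch, not the project's own rank-filter code (which is not included in the listing later in this report):

import java.util.Arrays;

public class Rank3x3 {
    // rank 0 = min filter, rank 4 = median filter, rank 8 = max filter
    static int[] filter(int[] img, int w, int h, int rank) {
        int[] out = img.clone();               // border pixels left unchanged
        int[] window = new int[9];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int k = 0;
                for (int s = -1; s <= 1; s++)
                    for (int t = -1; t <= 1; t++)
                        window[k++] = img[(y + s) * w + (x + t)];
                Arrays.sort(window);           // rank the 3x3 neighborhood
                out[y * w + x] = window[rank];
            }
        }
        return out;
    }
}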

e. Minimum Mean Square Error (Wiener) Filtering / Image Restoration: If an image f(x, y) is degraded in passing through an optical system, and the detected image g(x, y) represents the effect of the point spread function h(x, y) of the system, then in the frequency domain the process can be represented by G = HF, where it is assumed that there is no noise. If it is further assumed that H(w, z) is either known or can be determined, then it is possible to regain the original image by the inverse process

F^(w, z) = G(w, z) / H(w, z)

All of this work is done in the frequency domain, and the result is Fourier transformed back to real space. The idea is good; however, this process is very susceptible to noise (although a more complicated effort using Wiener filters might help if there is noise) and demands very accurate knowledge of the transfer function H. The Wiener method is founded on considering images and noise as random processes, and the objective is to find an estimate f^ of the uncorrupted image f such that the mean square error between them is minimized. This error measure is given by

e^2 = E{ (f - f^)^2 }

where E{.} is the expected value of the argument. It is assumed that the noise and the image are uncorrelated; that one or the other has zero mean; and that the gray levels in the estimate are a linear function of the levels in the degraded image.
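For reference, minimizing this error leads to the standard Wiener filter expression in the frequency domain (quoted here from the standard literature rather than derived in this report):

F^(u, v) = [ H*(u, v) / ( |H(u, v)|^2 + S_n(u, v)/S_f(u, v) ) ] G(u, v)

where H* is the complex conjugate of the transfer function, and S_n and S_f are the power spectra of the noise and of the undegraded image; in practice the ratio S_n/S_f is often replaced by a constant K.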

3.3.2 Frequency Filters: Low frequencies in the Fourier transform are responsible for the general gray-level appearance of an image over smooth areas, while high frequencies are responsible for detail, such as edges and noise. A filter that attenuates high frequencies while passing low frequencies is called a lowpass filter; a filter with the opposite characteristic is appropriately called a highpass filter.

a. Ideal Lowpass Filter: The simplest lowpass filter we can envision cuts off all high-frequency components of the Fourier transform that are at a distance greater than a specified distance D0 from the origin of the (centered) transform. Such a filter is called a two-dimensional (2-D) ideal lowpass filter (ILPF) and has the transfer function

H(u, v) = 1 if D(u, v) <= D0
H(u, v) = 0 if D(u, v) > D0

where D0 is a specified nonnegative quantity and D(u, v) is the distance from the point (u, v) to the origin of the frequency rectangle. If the image in question is of size M x N, we know that its transform is also of this size, so the center of the frequency rectangle is at (u, v) = (M/2, N/2) because the transform has been centered. In this case, the distance from any point (u, v) to the center (origin) of the Fourier transform is given by

D(u, v) = [ (u - M/2)^2 + (v - N/2)^2 ]^(1/2)

b. Ideal Highpass Filter: A 2-D ideal highpass filter (IHPF) is defined as

H(u, v) = 0 if D(u, v) <= D0
H(u, v) = 1 if D(u, v) > D0

where D0 is the cutoff distance measured from the origin of the frequency rectangle and D(u, v) is as defined above. This filter is the opposite of the ideal lowpass filter, in the sense that it sets to zero all frequencies inside a circle of radius D0 while passing, without attenuation, all frequencies outside the circle. As in the case of the ideal lowpass filter, the IHPF is not physically realizable with electronic components; however, since it can be implemented in a computer, we consider it for completeness.

c. Butterworth Lowpass Filter: The transfer function of a Butterworth lowpass filter (BLPF) of order n, with cutoff frequency at a distance D0 from the origin, is defined as

H(u, v) = 1 / (1 + [D(u, v)/D0]^(2n))

Unlike the ILPF, the BLPF transfer function does not have a sharp discontinuity that establishes a clear cutoff between passed and filtered frequencies. For filters with smooth transfer functions, it is customary to define a cutoff frequency locus at points for which H(u, v) is down to a certain fraction of its maximum value.

d. Butterworth Highpass Filter: The transfer function of the Butterworth highpass filter (BHPF) of order n, with cutoff frequency locus at a distance D0 from the origin, is given by

H(u, v) = 1 / (1 + [D0/D(u, v)]^(2n))

As in the case of lowpass filters, we can expect Butterworth highpass filters to behave more smoothly than IHPFs.

e. High Boost Filter: A process used for many years in the publishing industry to sharpen images consists of subtracting a blurred version of an image from the image itself. This process, called unsharp masking, is expressed as

fs(x, y) = f(x, y) - f1(x, y)

where fs(x, y) denotes the sharpened image obtained by unsharp masking and f1(x, y) is a blurred version of f(x, y). The origin of unsharp masking is in darkroom photography, where it consists of clamping together a blurred negative with a corresponding positive film and then developing the combination to produce a sharper image. A slight further generalization of unsharp masking is called high-boost filtering. A high boost filtered image, fhb, is defined at any point (x, y) as

fhb(x, y) = A f(x, y) - f1(x, y)
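For concreteness, these are the 3x3 masks that this project's own code (the lp(), hp(), and hb() methods of pck.java, listed later) passes to its convolution routine for the Low Pass, High Pass, and High Boost menu options:

int[] lowPass   = { 1, 1, 1,      // averaging mask, used with
                    1, 1, 1,      // multiplier 0.11111 (about 1/9)
                    1, 1, 1 };

int[] highPass  = { -1, -1, -1,   // sharpening mask, multiplier 1
                    -1,  8, -1,
                    -1, -1, -1 };

int[] highBoost = { -1, -1, -1,   // high boost: the high pass mask plus
                    -1,  9, -1,   // the original image (center weight 8 + 1)
                    -1, -1, -1 };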

4. SYSTEM SPECIFICATION
SOFTWARE ENVIRONMENT:
Operating system : Windows 98/XP
Tool             : Java Frames / Gel

HARDWARE ENVIRONMENT:
Processor        : Pentium III / Core 2 Duo
RAM              : 64 MB
Hard disk        : 2.1 GB
Processor speed  : 512 MHz


5. ABOUT JAVA

JAVA AND ITS BASICS:

Java represents the end result of nearly 15 years of trying to come up with a better programming language and environment for building simpler and more reliable software, an effort associated at Sun Microsystems with co-founder Bill Joy. Java is a small, simple, safe, object-oriented, interpreted and dynamically optimized, byte-coded, architecture-neutral, garbage-collected, multithreaded programming language with a strongly typed exception-handling mechanism for writing distributed, dynamically extensible programs.

A Java program is created as a text file with the file extension ".java". It is compiled into one or more files of bytecodes with the extension ".class". Bytecodes are a set of instructions similar to the machine code instructions created when a computer program is compiled. The difference is that machine code must run on the computer system it was compiled for, whereas bytecodes can run on any computer system equipped to handle Java programs.
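As a minimal illustration of this compile-once, run-anywhere cycle (the file and class names here are of course arbitrary):

// HelloWorld.java -- compile with "javac HelloWorld.java" to produce
// HelloWorld.class (bytecodes), then run with "java HelloWorld" on any
// system that has a Java virtual machine.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}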


FEATURES OF JAVA

Simple, Object-Oriented: Primary characteristics of Java include a simple language that can be programmed without extensive training while being attuned to current software practices. Java is designed to be object-oriented from the ground up.

Robust: Java code is well behaved, which is needed for solid applications that won't bring down a system when a user stumbles across a home page with a small animation. Java provides extensive compile-time checking, followed by a second level of runtime checking. Its memory management model, with no pointers or pointer arithmetic, eliminates entire classes of programming errors that are available to C and C++ programmers.

Secure: Java has to protect the client against unintentional attacks, and it protects against intentional ones as well. Java is designed to operate in distributed environments, which means that security is of paramount importance. With security features designed into the language and run-time system, Java lets us construct applications that cannot be invaded from outside.

Architecture-Neutral: Java is designed to support applications that will be deployed in heterogeneous networked environments. To accommodate the diversity of operating environments, the Java compiler generates bytecodes, an architecture-neutral intermediate format designed to transport code efficiently to multiple hardware and software platforms. The interpreted nature of Java solves both the binary distribution problem and the version problem: the same Java program will run on any platform. Java is portable in that it can run on any machine that has a Java interpreter ported to it. Architecture neutrality is just one part of a truly portable system; the architecture-neutral and portable language environment of Java is known as the Java virtual machine.

Multithreaded: Multithreading is the ability of one program to do more than one thing at once, for example printing while receiving a fax. The Java language provides the Thread class, and the runtime system provides monitor and condition-lock primitives. Java offloads the implementation of multithreading to the underlying operating system.

High Performance: Performance is always a consideration. Java achieves superior performance by adopting a scheme by which the interpreter can run at full speed without needing to check the runtime environment. The automatic garbage collector ensures a high probability that memory is available when required, leading to better performance. Applications requiring a large amount of computing power can be designed so that compute-intensive sections are written in native machine code as required and reused. In general, users perceive that interactive applications respond quickly even though they are interpreted. The environment takes over many error-prone tasks from the programmer, such as pointers and memory management.
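A minimal sketch of the Thread class mentioned above (illustrative only, not part of the project code):

public class TwoThings {
    public static void main(String[] args) throws InterruptedException {
        // A second thread of execution, built on the Thread class.
        Thread worker = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 3; i++)
                    System.out.println("worker: " + i);
            }
        });
        worker.start();   // runs concurrently with the main thread
        for (int i = 0; i < 3; i++)
            System.out.println("main: " + i);
        worker.join();    // wait for the worker to finish
    }
}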

6. SOFTWARE DEVELOPMENT

SYSTEM DESIGN

Context Level Diagram

[Diagram: User -> Image Enhancement Techniques -> User]

The user selects the image on which enhancements are to be performed; the result after enhancement goes back to the user.


FIRST LEVEL DIAGRAM

[Diagram: User -> Menus (File: Open, Exit; Image) -> User]

The menu bar contains two menus, File and Image. The File menu contains two options: Open, to select an image, and Exit, to close the window. After opening an image, the Image menu is used to perform the enhancements.


Second Level Diagram

[Diagram: User -> Image menu -> Negate / Rotate / Gray Scale / Contrast / Histogram -> User]

The Image menu contains enhancement operations such as negating the image, rotating the image, converting it to gray scale, performing contrast stretching, and finally the histogram techniques.


Hierarchical Chart:
Spatial Domain Filters:
- Mean Filter
- Median Filter
- MMSE Filter
- Rank Filter
- Min Filter
- Max Filter
- High Boost Filter


Frequency Domain Filters:
- Low Pass Filter
- High Pass Filter
- Ideal Low Pass Filter
- Ideal High Pass Filter
- Butterworth Low Pass Filter
- Butterworth High Pass Filter


7. TESTING AND IMPLEMENTATION


Testing: Testing newly developed or modified systems is one of the most important activities in the system development methodology. The goal of testing is to verify the logical and physical operation of the design blocks and to determine that they operate as intended.

Black Box Testing: Black box testing refers to tests that are conducted at the software interface. They are used to demonstrate that software functions are operational, that input is properly accepted, and that output is correctly produced.

White Box Testing: White box testing is predicated on close examination of procedural detail, providing test cases that exercise specific sets of conditions and/or loops and test paths through the software. Basis path testing is a white box testing technique. The basis path method enables the test case designer to derive a measure of the logical complexity of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.

Testing of the system followed the strategies given below:


Unit Testing: During the implementation of the system, each module of the system was tested separately to uncover errors within its boundaries. The user interface was used as a guide in the process.

Integration Testing: Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure dictated by the design. In non-incremental integration, all the modules are combined in advance and the entire program is tested as a whole. The present software was instead tested using bottom-up integration, which begins construction and testing with atomic modules: low-level modules are combined into clusters, a driver is written to co-ordinate test case input and output, the cluster is tested, and then drivers are removed and clusters are combined moving upward in the program structure.

Validation Test: At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, may begin. Software validation was achieved through a series of black box tests that demonstrate conformity with the requirements. The validation tests succeeded because the software functions correctly for all the different inputs given.

System Test: System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. These include recovery testing (behavior during crashes), security testing (against unauthorized users), etc.


8. CONCLUSION
The objective of the project is to smooth and sharpen images by using various filtering techniques, which are among the enhancement techniques of digital image processing. In this project I have implemented a few spatial domain filters and frequency domain filters. The spatial domain filters remove noise by blurring the image, and the frequency domain filters are used to sharpen the fine details of an image. Filters are useful in many application areas such as medical diagnosis, the military, and industry.


9. OUTPUT SCREENS


SOURCE CODE
// MAIN PROGRAM
// pck.java

import java.applet.*;
import java.awt.*;
import java.awt.Event;
import java.awt.image.*;

public class pck extends myframe {
    Insets insets;
    Image src, dst;
    ImageProducer filtered;
    String filename = null;

    public pck() {}

    public void paint(Graphics g) {
        if (filename != null)
            // g.drawImage(dst, insets.left, insets.top, this);
            g.drawImage(dst, insets.left + 50, insets.top + 50, this);
    }

    public static void main(String a[]) {
        pck f = new pck();
        f.show();
    }

    // Load the selected image file and wait until it is fully loaded.
    public void abc() {
        if (filename != null) {
            src = Toolkit.getDefaultToolkit().getImage(filename);
            MediaTracker mt = new MediaTracker(this);
            mt.addImage(src, 0);
            try {
                mt.waitForID(0);
            } catch (Exception e) {
                System.out.println("Image Loading Error!" + e);
                System.exit(0);
            }
            dst = src;
            repaint();
        }
    }

    public void addNotify() {
        super.addNotify();
        insets = getInsets();
        setBounds(50, 50, 500 + insets.left, 350 + insets.top);
    }

    public void open() {
        FileDialog fd = new FileDialog(this, "Open", FileDialog.LOAD);
        fd.setVisible(true);
        String dir = fd.getDirectory();
        String fname = fd.getFile();
        filename = dir + fname;
        abc();
    }

    // Restore the unmodified source image.
    public void org() {
        if (filename != null) {
            dst = src;
            repaint();
        }
    }

    // Level slicing: prompt for four 0-255 values and apply them.
    public void ls() {
        if (filename != null) {
            int iw, ih;
            int j1, j2, j3, j4;
            int pixels[], out[];
            int errflag = 0;
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}

            mydialog md = new mydialog(this, "Level Slicing", true, 1);
            md.setSize(250, 180);
            md.setVisible(true);

            if ((md.val1 != null) && (md.val2 != null) && (md.val3 != null) && (md.val4 != null)) {
                j1 = Integer.parseInt(md.val1);
                if ((j1 < 0) || (j1 > 255)) errflag = 1;
                j2 = Integer.parseInt(md.val2);
                if ((j2 < 0) || (j2 > 255)) errflag = 1;
                j3 = Integer.parseInt(md.val3);
                if ((j3 < 0) || (j3 > 255)) errflag = 1;
                j4 = Integer.parseInt(md.val4);
                if ((j4 < 0) || (j4 > 255)) errflag = 1;

                if (errflag == 0) {
                    LevelSlicing level = new LevelSlicing(j1, j2, j3, j4);
                    out = level.slicedImage(pixels, iw, ih);
                    dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
                } else {
                    errdialog ed = new errdialog(this, "Error!", true);
                    ed.setSize(200, 100);
                    ed.setVisible(true);
                }
            }
            repaint();
        }
    }

    // Contrast stretching: prompt for four 0-255 values and apply them.
    public void cs() {
        if (filename != null) {
            int iw, ih;
            int j1, j2, j3, j4;
            int pixels[], out[];
            int errflag = 0;
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}

            mydialog md = new mydialog(this, "Contrast Stretching", true, 2);
            md.setSize(250, 180);
            md.setVisible(true);

            if ((md.val1 != null) && (md.val2 != null) && (md.val3 != null) && (md.val4 != null)) {
                j1 = Integer.parseInt(md.val1);
                if ((j1 < 0) || (j1 > 255)) errflag = 1;
                j2 = Integer.parseInt(md.val2);
                if ((j2 < 0) || (j2 > 255)) errflag = 1;
                j3 = Integer.parseInt(md.val3);
                if ((j3 < 0) || (j3 > 255)) errflag = 1;
                j4 = Integer.parseInt(md.val4);
                if ((j4 < 0) || (j4 > 255)) errflag = 1;

                if (errflag == 0) {
                    contrast c = new contrast(j1, j2, j3, j4);
                    out = c.csimage(pixels, iw, ih);
                    dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
                } else {
                    errdialog ed = new errdialog(this, "Error!", true);
                    ed.setSize(200, 100);
                    ed.setVisible(true);
                }
            }
            repaint();
        }
    }

    // Low pass filter: 3x3 averaging mask, multiplier of about 1/9.
    public void lp() {
        if (filename != null) {
            int[] filter = { 1, 1, 1,
                             1, 1, 1,
                             1, 1, 1 };
            double multiplier = 0.11111;
            convfilter cv = new convfilter(filter);
            filtered = cv.filteredImage(dst, multiplier);
            dst = createImage(filtered);
            repaint();
        }
    }

    // High pass filter: sharpening mask with center weight 8.
    public void hp() {
        if (filename != null) {
            int[] filter = { -1, -1, -1,
                             -1,  8, -1,
                             -1, -1, -1 };
            double multiplier = 1;
            convfilter cv = new convfilter(filter);
            filtered = cv.filteredImage(dst, multiplier);
            dst = createImage(filtered);
            repaint();
        }
    }

    // High boost filter: high pass mask plus the original image (center 8 + 1).
    public void hb() {
        if (filename != null) {
            int[] filter = { -1, -1, -1,
                             -1,  9, -1,
                             -1, -1, -1 };
            double multiplier = 1;
            convfilter cv = new convfilter(filter);
            filtered = cv.filteredImage(dst, multiplier);
            dst = createImage(filtered);
            repaint();
        }
    }

    public void hist() {
        if (filename != null) {
            int iw, ih;
            int pixels[], out[];
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}
            Histogram h = new Histogram(this, "Histogram");
            h.set(pixels, iw, ih);
            h.setSize(320, 480);
            h.setVisible(true);
        }
    }

    // Thresholding: prompt for a single 0-255 threshold value.
    public void thr() {
        if (filename != null) {
            int iw, ih;
            int pixels[], out[];
            int j1;
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}

            mydialog md = new mydialog(this, "Thresholding", true, 0);
            md.setSize(200, 130);
            md.setVisible(true);

            if (md.val1 != null) {
                j1 = Integer.parseInt(md.val1);
                if ((j1 < 0) || (j1 > 255)) {
                    errdialog ed = new errdialog(this, "Error!", true);
                    ed.setSize(200, 100);
                    ed.setVisible(true);
                } else {
                    thresholding level = new thresholding(j1);
                    out = level.tsimage(pixels, iw, ih);
                    dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
                }
            }
            // MODIFIED
            repaint();
        }
    }

    public void heq() {
        if (filename != null) {
            int iw, ih;
            int pixels[], out[];
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}
            he heq = new he();
            out = heq.equalize(pixels, iw, ih);
            dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
            repaint();
        }
    }

    public void neg() {
        if (filename != null) {
            int iw, ih;
            int pixels[], out[];
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}
            Negator negator = new Negator();
            out = negator.negatepixels(pixels, iw, ih);
            dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
            repaint();
        }
    }

    public void gs() {
        if (filename != null) {
            int iw, ih;
            int pixels[], out[];
            iw = dst.getWidth(null);
            ih = dst.getHeight(null);
            pixels = new int[iw * ih];
            try {
                PixelGrabber pg = new PixelGrabber(dst, 0, 0, iw, ih, pixels, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}
            grey g = new grey();
            out = g.greyimage(pixels, iw, ih);
            dst = createImage(new MemoryImageSource(iw, ih, out, 0, iw));
            repaint();
        }
    }

    public void rotate() {
        if (filename != null) {
            rotate1 rt = new rotate1();
            dst = rt.rimage(dst);
            repaint();
        }
    }
}


// ROTATE.JAVA

import java.io.*;
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;
import java.awt.Image;
import java.awt.geom.*;

// Class renamed from "rotate" to rotate1 so that it matches the
// "new rotate1()" call in pck.java above.
public class rotate1 {
    int iw, ih;
    BufferedImage bimage = null, simage = null;

    public rotate1() {}

    public Image rimage(Image img) {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        try {
            int transparency = Transparency.OPAQUE;
            GraphicsDevice gs = ge.getDefaultScreenDevice();
            GraphicsConfiguration gc = gs.getDefaultConfiguration();
            bimage = gc.createCompatibleImage(img.getWidth(null), img.getHeight(null), transparency);
        } catch (Exception e) {}

        if (bimage == null) {
            int type = BufferedImage.TYPE_INT_RGB;
            bimage = new BufferedImage(img.getWidth(null), img.getHeight(null), type);
        }
        Graphics g = bimage.createGraphics();
        g.drawImage(img, 0, 0, null);
        g.dispose();

        // Rotate by 0.125 radians about the image center.
        AffineTransform tx = new AffineTransform();
        tx.rotate(0.125, bimage.getWidth() / 2, bimage.getHeight() / 2);

        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BILINEAR);
        bimage = op.filter(bimage, null);

        return Toolkit.getDefaultToolkit().createImage(bimage.getSource());
    }
}


// NEGATOR.JAVA -- FOR NEGATIVE

import java.applet.*;
import java.awt.*;
import java.awt.image.*;

public class Negator {
    int outpixels[];

    // Was declared "public void Negator()" in the original listing;
    // without the void it is a proper constructor.
    public Negator() {}

    // Invert each 8-bit channel: r, g, b -> 255 - r, 255 - g, 255 - b.
    public int[] negatepixels(int inpixels[], int width, int height) {
        outpixels = new int[width * height];
        for (int i = 0; i < (width * height); i++) {
            int p = inpixels[i];

            int r = 0xff & (p >> 16);
            r = 255 - r;

            int g = 0xff & (p >> 8);
            g = 255 - g;

            int b = 0xff & p;
            b = 255 - b;

            outpixels[i] = (255 << 24) | (r << 16) | (g << 8) | b;
        }
        return outpixels;
    }
}


// GREY.JAVA

import java.awt.*;
import java.awt.image.*;
import java.applet.*;

public class grey {
    int outpixels[];

    public grey() {}

    // Convert each pixel to a weighted grey value (weights sum to 1.0).
    public int[] greyimage(int[] inpixels, int width, int height) {
        outpixels = new int[width * height];
        for (int i = 0; i < (width * height); i++) {
            int p = inpixels[i];
            int r = 0xff & (p >> 16);
            int g = 0xff & (p >> 8);
            int b = 0xff & p;
            int y = (int) (.33 * r + .56 * g + .11 * b);

            r = y;
            g = y;
            b = y;

            outpixels[i] = (255 << 24) | (r << 16) | (g << 8) | b;
        }
        return outpixels;
    }
}


// CONTRAST.JAVA

import java.awt.*;
import java.awt.image.*;
import java.applet.*;

public class contrast {
    int r1, s1, r2, s2;
    double slp1, slp2, slp3;   // kept as doubles; the original int slopes truncated to 0 or 1
    public int outpixels[];

    public contrast(int sd1, int sd2, int sd3, int sd4) {
        r1 = sd1;
        s1 = sd2;
        r2 = sd3;
        s2 = sd4;

        // Piecewise-linear contrast stretching: slopes of the three segments
        // (0,0)-(r1,s1), (r1,s1)-(r2,s2), and (r2,s2)-(255,255).
        slp1 = (double) s1 / r1;
        slp2 = (double) (s2 - s1) / (r2 - r1);
        slp3 = (double) (255 - s2) / (255 - r2);
        System.out.println("the value of slp1 is " + slp1);
        System.out.println("the value of slp2 is " + slp2);
        System.out.println("the value of slp3 is " + slp3);
    }

    public int[] csimage(int[] inpixels, int width, int height) {
        outpixels = new int[width * height];
        for (int i = 0; i < (width * height); i++) {
            int p = inpixels[i];

            int r = 0xff & (p >> 16);
            if ((r >= 0) && (r <= r1))
                r = (int) (slp1 * r);
            else if ((r > r1) && (r < r2))
                r = (int) (slp2 * (r - r1) + s1);
            else if ((r >= r2) && (r <= 255))
                r = (int) (slp3 * (r - r2) + s2);

            int g = 0xff & (p >> 8);
            if ((g >= 0) && (g <= r1))
                g = (int) (slp1 * g);
            else if ((g > r1) && (g < r2))
                g = (int) (slp2 * (g - r1) + s1);
            else if ((g >= r2) && (g <= 255))
                g = (int) (slp3 * (g - r2) + s2);

            int b = 0xff & p;
            if ((b >= 0) && (b <= r1))
                b = (int) (slp1 * b);
            else if ((b > r1) && (b < r2))
                b = (int) (slp2 * (b - r1) + s1);
            else if ((b >= r2) && (b <= 255))
                b = (int) (slp3 * (b - r2) + s2);

            outpixels[i] = (255 << 24) | (r << 16) | (g << 8) | b;
        }
        return outpixels;
    }
}


// CONVFILTER.JAVA

import java.awt.*;
import java.awt.Component;
import java.awt.image.*;

class convfilter implements ImageObserver {
    int[] oldpixels, newpixels;
    int w, h;
    PixelGrabber pg;
    MemoryImageSource mis;
    int index = 0;
    int i00, i01, i02, i10, i11, i12, i20, i21, i22;   // offsets of the 3x3 neighborhood
    int p00, p01, p02, p10, p11, p12, p20, p21, p22;   // pixel values under the mask
    int w00, w01, w02, w10, w11, w12, w20, w21, w22;   // mask weights

    public convfilter(int[] matrix) {
        w00 = matrix[0]; w01 = matrix[1]; w02 = matrix[2];
        w10 = matrix[3]; w11 = matrix[4]; w12 = matrix[5];
        w20 = matrix[6]; w21 = matrix[7]; w22 = matrix[8];
    }

    public boolean imageUpdate(Image img, int infoflags, int x, int y, int width, int height) {
        w = width;
        h = height;
        if (w != -1 && h != -1) {
            return false;
        }
        return true;
    }

    public ImageProducer filteredImage(Image source, double mult) {
        boolean success;
        w = source.getWidth(null);
        h = source.getHeight(null);
        oldpixels = new int[w * h];
        newpixels = new int[w * h];
        pg = new PixelGrabber(source.getSource(), 0, 0, w, h, oldpixels, 0, w);
        try {
            success = pg.grabPixels(0);
        } catch (Exception e) {
            System.out.println("Error in grabbing" + e);
        }
        // Start at the second pixel of the second row; the one-pixel border is skipped.
        index = w + 1;
        for (int y = 1; y < h - 1; y++) {
            calc3x3offsets();
            for (int x = 1; x < w - 1; x++) {
                p00 = oldpixels[i00]; p01 = oldpixels[i01]; p02 = oldpixels[i02];
                p10 = oldpixels[i10]; p11 = oldpixels[i11]; p12 = oldpixels[i12];
                p20 = oldpixels[i20]; p21 = oldpixels[i21]; p22 = oldpixels[i22];

                // Convolve each channel separately (red, green, blue).
                int newRed = applyWeights(16, mult);
                int newGreen = applyWeights(8, mult);
                int newBlue = applyWeights(0, mult);

                newpixels[index++] = 255 << 24 | newRed | newGreen | newBlue;

                i00++; i01++; i02++;
                i10++; i11++; i12++;
                i20++; i21++; i22++;
            }
            index += 2;   // skip the last pixel of this row and the first of the next
        }
        mis = new MemoryImageSource(w, h, newpixels, 0, w);
        return mis;
    }

    final void calc3x3offsets() {
        i00 = index - w - 1; i01 = i00 + 1; i02 = i00 + 2;
        i10 = index - 1;     i11 = index;   i12 = index + 1;
        i20 = index + w - 1; i21 = i20 + 1; i22 = i20 + 2;
    }

    final int applyWeights(int shift, double multfactor) {
        double total = 0;
        total += ((p00 >> shift) & 0xFF) * w00;
        total += ((p01 >> shift) & 0xFF) * w01;
        total += ((p02 >> shift) & 0xFF) * w02;
        total += ((p10 >> shift) & 0xFF) * w10;
        total += ((p11 >> shift) & 0xFF) * w11;
        total += ((p12 >> shift) & 0xFF) * w12;
        total += ((p20 >> shift) & 0xFF) * w20;
        total += ((p21 >> shift) & 0xFF) * w21;
        total += ((p22 >> shift) & 0xFF) * w22;

        total = total * multfactor;
        if (total > 255) total = 255;   // clamp to the 0-255 range
        if (total < 0) total = 0;
        return ((int) total) << shift;
    }
}


// BLURIMAGE.JAVA

import java.applet.*;
import java.awt.*;
import java.awt.image.*;
/* <applet code=Blurimage.class width=300 height=400>
   <param name=img value=c:/Balaji.gif> </applet> */

public class Blurimage extends Applet {
    Image img;
    int cell[];
    int cell1[];
    int iw, ih;
    int tw, th;
    int rs, gs, bs, r, g, b, rgb;

    public void init() {
        try {
            img = getImage(getDocumentBase(), getParameter("img"));
            MediaTracker t = new MediaTracker(this);
            t.addImage(img, 0);
            t.waitForID(0);
            iw = img.getWidth(null);
            ih = img.getHeight(null);
            cell = new int[iw * ih];
            cell1 = new int[iw * ih];
            // This grab was commented out in the original listing, which
            // would leave cell[] empty; it is needed for the blur to work.
            try {
                PixelGrabber pg = new PixelGrabber(img, 0, 0, iw, ih, cell, 0, iw);
                pg.grabPixels();
            } catch (InterruptedException e) {}
            for (int y = 1; y < ih - 1; y++) {          // was y < ih: stay inside the border
                for (int x = 1; x < iw - 1; x++) {      // was x < iw
                    rs = 0;
                    gs = 0;
                    bs = 0;
                    for (int k = -1; k <= 1; k++) {
                        for (int j = -1; j <= 1; j++) {
                            rgb = cell[(y + k) * iw + x + j];   // was (y*k), a typo
                            r = (rgb >> 16) & 0xff;
                            g = (rgb >> 8) & 0xff;
                            b = rgb & 0xff;

                            rs += r;
                            gs += g;
                            bs += b;
                        }
                    }
                    // Average of the 9 neighbors; in the original this
                    // division sat inside the inner loop by mistake.
                    rs /= 9;
                    gs /= 9;
                    bs /= 9;
                    cell1[y * iw + x] = (0xff000000 | rs << 16 | gs << 8 | bs);
                }
            }
            img = createImage(new MemoryImageSource(iw, ih, cell1, 0, iw));
            repaint();
        } catch (Exception e) {}
    }
}


// ERRDIALOG.JAVA

import java.awt.*;
import java.awt.event.*;

class errdialog extends Dialog implements ActionListener {
    Label m1, a1;
    Button ok;

    public errdialog(Frame p, String title, boolean m) {
        super(p, title, m);
        Panel p1 = new Panel();
        Panel p2 = new Panel();

        setLayout(new BorderLayout());
        m1 = new Label("Values should be within 0-255");
        ok = new Button("Ok");
        p1.add(m1);
        p2.add(ok);

        add("North", p1);
        add("Center", p2);

        ok.addActionListener(this);
    }

    public void actionPerformed(ActionEvent ae) {
        String s = ae.getActionCommand();
        if (ae.getSource() instanceof Button) {
            if (s.equals("Ok"))
                dispose();
        }
    }
}

// MAIN FRAME (MYFRAME.JAVA)

import java.awt.*;
import java.awt.event.*;

abstract class myframe extends Frame implements ActionListener, WindowListener {
    MenuItem op, cls;
    MenuItem org, neg, ls, lp, hp, hb, hist, thr, gs, cs, rot, heq;

    public myframe() {
        super("Image Enhancement");
        MenuBar mb1 = new MenuBar();
        setMenuBar(mb1);
        Menu file = new Menu("File");
        Menu image = new Menu("Image");

        mb1.add(file);
        mb1.add(image);

        file.add(op = new MenuItem("Open"));
        file.add(cls = new MenuItem("Exit"));

        image.add(org = new MenuItem("Original Image"));
        image.add(neg = new MenuItem("Negate"));
        image.add(gs = new MenuItem("Grey Scale"));
        image.add(rot = new MenuItem("Rotate"));
        image.add(ls = new MenuItem("Level Slicing"));
        // Label spelling corrected from "Constrast Stretching"; it must
        // match the check in actionPerformed below.
        image.add(cs = new MenuItem("Contrast Stretching"));
        image.add(thr = new MenuItem("Thresholding"));
        image.add(lp = new MenuItem("Low Pass Filter"));
        image.add(hp = new MenuItem("High Pass Filter"));
        image.add(hb = new MenuItem("High Boost Filter"));
        image.add(hist = new MenuItem("Histogram"));
        image.add(heq = new MenuItem("Histogram Equalizing"));
        // image.add(eng = new MenuItem("Enlarge"));

        op.addActionListener(this);
        cls.addActionListener(this);
        org.addActionListener(this);
        neg.addActionListener(this);
        gs.addActionListener(this);
        rot.addActionListener(this);
        ls.addActionListener(this);
        cs.addActionListener(this);
        thr.addActionListener(this);
        lp.addActionListener(this);
        hp.addActionListener(this);
        hb.addActionListener(this);
        hist.addActionListener(this);
        heq.addActionListener(this);
        // eng.addActionListener(this);
        addWindowListener(this);
    }

    public void actionPerformed(ActionEvent ae) {
        String s = ae.getActionCommand();

        if (ae.getSource() instanceof MenuItem) {
            if (s.equals("Exit"))
                System.exit(0);
            else if (s.equals("Open"))
                open();
            else if (s.equals("Original Image"))
                org();
            else if (s.equals("Negate"))
                neg();
            else if (s.equals("Grey Scale"))
                gs();
            else if (s.equals("Rotate"))
                rotate();
            else if (s.equals("Level Slicing"))
                ls();
            else if (s.equals("Low Pass Filter"))
                lp();
            else if (s.equals("High Pass Filter"))
                hp();
            else if (s.equals("High Boost Filter"))
                hb();
            else if (s.equals("Histogram"))
                hist();
            else if (s.equals("Thresholding"))
                thr();
            else if (s.equals("Contrast Stretching"))
                cs();
            else if (s.equals("Histogram Equalizing"))
                heq();
        }
    }

    public void windowClosed(WindowEvent we) {}
    public void windowDeiconified(WindowEvent we) {}
    public void windowIconified(WindowEvent we) {}
    public void windowActivated(WindowEvent we) {}
    public void windowDeactivated(WindowEvent we) {}
    public void windowOpened(WindowEvent we) {}

    public void windowClosing(WindowEvent we) {
        dispose();
        System.exit(0);
    }

    abstract void open();
    abstract void org();
    abstract void neg();
    abstract void gs();
    abstract void ls();
    abstract void cs();
    abstract void lp();
    abstract void hp();
    abstract void hb();
    abstract void hist();
    abstract void thr();
    abstract void heq();
    abstract void rotate();
    // abstract void eng();
}


// MYDIALOG.JAVA -- input dialogs for the menu options

import java.awt.*;
import java.awt.event.*;

class mydialog extends Dialog implements ActionListener {
    TextField thr, ls1, ls2, ls3, ls4;
    Label m1, a1;
    Button ok, can;
    String val1 = null, val2 = null, val3 = null, val4 = null;
    int t;

    // i selects the dialog type: 0 = thresholding, 1 = level slicing,
    // 2 = contrast stretching.
    public mydialog(Frame p, String title, boolean m, int i) {
        super(p, title, m);
        t = i;
        Panel p1 = new Panel();
        Panel p2 = new Panel();
        Panel p3 = new Panel();

        setLayout(new BorderLayout());
        ok = new Button("Ok");
        can = new Button("Cancel");

        p2.setLayout(new FlowLayout());
        p3.setLayout(new FlowLayout());   // MODIFIED
        p3.add(ok);
        p3.add(can);

        if (i == 1)
            a1 = new Label("Enter Min and Max values of Output:");
        else if (i == 2)
            a1 = new Label("Enter range of output Grey levels:");

        if ((i == 1) || (i == 2)) {
            m1 = new Label("Enter range of Grey levels to be changed:");

            ls1 = new TextField(4);
            ls1.setText("100");
            ls2 = new TextField(4);
            ls2.setText("200");
            ls3 = new TextField(4);
            ls3.setText("0");
            ls4 = new TextField(4);
            ls4.setText("255");

            p1.add(m1);
            p2.add(ls1);
            p2.add(ls2);
            p2.add(a1);
            p2.add(ls3);
            p2.add(ls4);
        } else if (i == 0) {
            m1 = new Label("Enter Threshold value(0-255):");
            ls1 = new TextField(4);
            p1.add(m1);
            p2.add(ls1);
        }

        add("North", p1);
        add("Center", p2);
        add("South", p3);

        ok.addActionListener(this);
        can.addActionListener(this);
    }

    public void actionPerformed(ActionEvent ae) {
        String s = ae.getActionCommand();

        if (ae.getSource() instanceof Button) {
            if (s.equals("Ok")) {
                val1 = ls1.getText();
                if ((t == 1) || (t == 2)) {
                    val2 = ls2.getText();
                    val3 = ls3.getText();
                    val4 = ls4.getText();
                }
                dispose();
            } else if (s.equals("Cancel"))
                dispose();
        }
    }
}

CONCLUSION

The image enhancement techniques we have implemented are very successful in providing users with the image information they need, separating out all other unrelated information. The enhancements include removing noise from images, enhancing contrast, highlighting required areas, and many others.


10. BIBLIOGRAPHY
1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson Education.
2. G.W. Awlock and R. Thomas, Applied Digital Image Processing, McGraw-Hill.
3. Introductory Computer Vision & Image Processing, McGraw-Hill.
4. M. Sonka, Image Processing, Analysis & Machine Vision, Thomson Learning.
5. Roger S. Pressman, Software Engineering, Tata McGraw-Hill, 2000.

