
" A picture is worth thousand words"

A brief note on the national seminar on digital image processing delivered by Dr. R KRISHNA MOORTHY at Mount Zion Engineering College on 12th March 2011.

Obtaining a digital image:: An analog image is subjected to light. Depending on the surface properties, the light gets reflected. The reflected light is measured as an 'intensity value', stored as a 'pixel', and from these values a digital image is obtained. The analog picture, which is continuous in nature, is thus discretized to give a digital image, which is discrete in nature.
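For illustration, a minimal Python sketch of this sampling-and-quantization idea, assuming the analog intensity along one scan line can be modelled by a simple continuous function (the function and the 8-bit quantization are illustrative choices, not from the seminar):

```python
import numpy as np

# Hypothetical continuous reflected-light intensity in [0, 1] along one line.
def analog_intensity(x):
    return 0.5 + 0.5 * np.sin(2 * np.pi * x)

# Sampling: measure the continuous signal at discrete positions.
positions = np.linspace(0.0, 1.0, num=16)        # 16 sample points
samples = analog_intensity(positions)

# Quantization: map each sample to one of 256 integer levels (8-bit pixels).
pixels = np.round(samples * 255).astype(np.uint8)
print(pixels)   # the discrete intensity values forming one row of a digital image
```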

Defining a digital image:: A collection of intensity values, OR a 2D array of intensity values. Mathematically, an image is defined as f(x,y), where x,y --> spatial coordinates and f --> intensity at that point.

Scanning an image:: 1. RASTER SCAN: Scanning in a regular pattern, from left to right and top to bottom. 2. RANDOM SCAN: Scanning in an irregular manner.
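A small sketch of f(x,y) as a 2D array, visited in raster order and in random order (the 3x3 values are arbitrary examples):

```python
import numpy as np

# A digital image f(x, y) as a 2-D array of intensity values.
f = np.array([[10, 20, 30],
              [40, 50, 60],
              [70, 80, 90]], dtype=np.uint8)

# Raster scan: regular pattern, left to right, top to bottom.
raster_order = [(x, y) for x in range(f.shape[0]) for y in range(f.shape[1])]

# Random scan: the same pixels visited in an irregular order.
rng = np.random.default_rng(0)
random_order = [raster_order[i] for i in rng.permutation(len(raster_order))]

print([int(f[x, y]) for x, y in raster_order])   # 10, 20, ..., 90
print([int(f[x, y]) for x, y in random_order])   # same values, irregular order
```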

Steps in image processing:: 1. IMAGE ACQUISITION: Converting an analog picture to digital form; in electronic terms, analog-to-digital conversion (ADC). 2. IMAGE ENHANCEMENT: Enriching the quality of the picture. For example, a satellite image is obtained with many limitations like atmospheric turbulence, national boundaries etc. To overcome these limitations the digital image is enhanced. 3. IMAGE RESTORATION: The missing portions of the image are compensated with the aid of neighboring pixel values; filling techniques are used to fill the missing portions (a small sketch of such filling appears after this list).

4. IMAGE SEGMENTATION: This step involves partitioning the image into homogeneous regions, with each region having some special properties. 5. IMAGE REPRESENTATION AND DESCRIPTION: To facilitate further processing, the image is given a certain representation. The methods include tree structures, the quad-tree approach etc. The boundary of an object in the image is identified by the abrupt change in pixel value; chain codes help to describe the boundary. 6. IMAGE COMPRESSION: To make savings in storage, the image is compressed.
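As referenced under step 3, a minimal sketch of restoration by filling, assuming missing pixels are marked with a sentinel value and repaired with the mean of their valid 3x3 neighbours (a simplified, illustrative filling technique, not the seminar's exact method):

```python
import numpy as np

def fill_missing(img, missing=-1):
    """Replace pixels equal to `missing` with the mean of their valid 3x3 neighbours."""
    out = img.astype(float).copy()
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            if img[x, y] == missing:
                neighbours = [
                    img[i, j]
                    for i in range(max(0, x - 1), min(rows, x + 2))
                    for j in range(max(0, y - 1), min(cols, y + 2))
                    if img[i, j] != missing
                ]
                if neighbours:
                    out[x, y] = np.mean(neighbours)
    return out

damaged = np.array([[100, 100, 100],
                    [100,  -1, 100],
                    [100, 100, 100]])
print(fill_missing(damaged))   # the hole is replaced by the neighbour mean (100)
```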

Types of images:: 1. RGB 2. GRAY SCALE / MONOCHROME 3. BLACK AND WHITE
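A small sketch relating the three types: an RGB image is reduced to gray scale with the conventional luminance weights, then to black and white by thresholding (the 0.299/0.587/0.114 weights and the threshold 128 are standard illustrative choices, not from the seminar):

```python
import numpy as np

# A random 4x4 RGB image (height x width x 3 channels).
rgb = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Gray scale / monochrome: weighted sum of the R, G, B channels.
gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

# Black and white: threshold the gray image (1 = white, 0 = black).
black_and_white = (gray >= 128).astype(np.uint8)

print(gray)
print(black_and_white)
```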

Image enhancement:: Purpose: To improve quality of image.

Use: Feature extraction, Image analysis, Visual information display

Two domains of enhancement: 1. Spatial domain: Here the operations are done directly on the original pixel values. The three basic gray-level transformations are: a) Image negative: subtracting the given pixel value from 255 (for an 8-bit image). b) Log transformation: the output is proportional to the logarithm of the original pixel value, which compresses the dynamic range and makes plots such as histograms easier to read. c) Power-law transforms: the pixel value is raised to some power (gamma). 2. Frequency domain: Here the operations are done on the transforms of the original pixel values.
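A minimal sketch of the three basic gray-level transformations on 8-bit values, using the common textbook forms s = 255 - r, s = c·log(1 + r) and s = 255·(r/255)^gamma (the constants c and gamma below are illustrative choices):

```python
import numpy as np

r = np.arange(0, 256, dtype=np.float64)          # all possible input gray levels

negative = 255.0 - r                             # image negative: s = 255 - r
log_t    = (255.0 / np.log(256.0)) * np.log1p(r) # log transform: s = c * log(1 + r)
gamma    = 255.0 * (r / 255.0) ** 0.5            # power-law: s = 255 * (r/255)^gamma

for name, s in [("negative", negative), ("log", log_t), ("power-law", gamma)]:
    print(name, s[[0, 64, 128, 255]].round(1))   # a few sample input/output pairs
```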

The transformations can be a) Geometric, like translation, scaling, rotation etc., i.e., changing the spatial coordinates. b) Unitary transforms like the DFT (computed via the FFT) etc., i.e., energy-preserving transforms. The transform used in international compression standards such as JPEG is the DCT.
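A small frequency-domain sketch using numpy's 2D FFT rather than the DCT (the principle is the same: transform, modify the coefficients, transform back); the 8x8 image and the low-pass mask are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

F = np.fft.fft2(img)                    # forward transform
F_shift = np.fft.fftshift(F)            # move the zero frequency to the centre

# Keep only a 4x4 block of low frequencies (a crude low-pass filter).
mask = np.zeros_like(F_shift)
mask[2:6, 2:6] = 1.0
smoothed = np.fft.ifft2(np.fft.ifftshift(F_shift * mask)).real

print(smoothed.round(1))                # a blurred version of the original block
```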

Image Compression:: Aim: Savings in storage, reduction in transmission cost. Compression is achieved by removing redundancy. The different types of redundancy are: a) Coding redundancy b) Inter-pixel redundancy c) Psychovisual redundancy. Image compression is of two types: a) Lossless compression: The image after decompression is the same as the original image, i.e., information is preserved, but we can only go for about 40-50% compression, i.e., a low compression ratio. Eg. Huffman coding. b) Lossy compression: The image after decompression is slightly different from the original image, i.e., information is not fully preserved, but we can have around 95% compression, i.e., a high compression ratio. Eg. JPEG.
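A minimal sketch of Huffman coding, the lossless example named above, applied to a short row of pixel values (the pixel values and the heap-based construction are illustrative); frequent values receive shorter codes, which removes coding redundancy without losing any information:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table for the given sequence of symbols."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, unique tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

row = [10, 10, 10, 10, 20, 20, 30, 40]
codes = huffman_codes(row)
encoded = "".join(codes[p] for p in row)
print(codes)                                         # shorter code for the frequent value 10
print(len(encoded), "bits vs", len(row) * 8, "bits uncompressed")
```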

Image compression techniques: a) Pixel-based: Run-length, Huffman b) Predictive: DPCM, ADPCM, DM c) Transform-based: DCT, DWT, OPT d) Hybrid: JPEG, JPEG2000
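A minimal sketch of run-length coding from the pixel category above: consecutive identical pixels are stored as (value, count) pairs, which exploits inter-pixel redundancy and is fully reversible (the sample row is illustrative):

```python
from itertools import groupby

def run_length_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    return [(value, len(list(run))) for value, run in groupby(pixels)]

def run_length_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    return [value for value, count in runs for _ in range(count)]

row = [0, 0, 0, 0, 255, 255, 0, 0, 0]
runs = run_length_encode(row)
print(runs)                               # [(0, 4), (255, 2), (0, 3)]
assert run_length_decode(runs) == row     # decompression recovers the original exactly
```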

Color Models:: A method to explain the properties/behavior of color within a context such as texture, edges, etc. Defining color: A narrow frequency band within the electromagnetic spectrum. Properties of light: a) Dominant frequency: Hue/color. b) Luminance: Brightness / radiant energy.

c) Saturation: Purity of the color, i.e., the amount of white light mixed with the hue.
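A small sketch relating these properties to a concrete color model, using Python's standard colorsys module to convert RGB to HSV (hue corresponds to the dominant frequency, S to saturation, V to brightness); the sample colors are illustrative:

```python
import colorsys

for name, (r, g, b) in [("pure red", (1.0, 0.0, 0.0)),
                        ("pale red", (1.0, 0.5, 0.5)),   # white added -> lower saturation
                        ("dark red", (0.5, 0.0, 0.0))]:  # less energy -> lower brightness
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name}: hue={h:.2f} saturation={s:.2f} value={v:.2f}")
```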

Texture:: - A feature used to partition images into regions of interest and to classify those regions. - Provides information about the spatial arrangement of colors or intensities in an image. - Characterized by the spatial distribution of intensity levels in a neighborhood. - A repeating pattern of local variation in image intensity. A texture can be described by three approaches: a) Structural. b) Statistical. c) Model-based.
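A minimal sketch of the statistical approach: the standard deviation of gray levels in a 3x3 neighbourhood as a simple measure of local variation (the window size and the two test patterns are illustrative choices):

```python
import numpy as np

def local_std(img, radius=1):
    """Standard deviation of gray levels in each pixel's (2*radius+1)^2 neighbourhood."""
    rows, cols = img.shape
    out = np.zeros_like(img, dtype=float)
    for x in range(rows):
        for y in range(cols):
            patch = img[max(0, x - radius):x + radius + 1,
                        max(0, y - radius):y + radius + 1]
            out[x, y] = patch.std()
    return out

flat   = np.full((5, 5), 100.0)                      # uniform region: no texture
checks = np.indices((5, 5)).sum(axis=0) % 2 * 255.0  # checkerboard: strong texture
print(local_std(flat).max())     # 0.0  (smooth region scores low)
print(local_std(checks).mean())  # clearly non-zero (busy region scores high)
```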
