
TUTORIAL

BASIC DIGITAL IMAGE PROCESSING


(USING ENVI 4.0 SOFTWARE)

Prepared by: Muhammad Kamal

BASIC REMOTE SENSING LABORATORY
CARTOGRAPHY AND REMOTE SENSING PROGRAM
FACULTY OF GEOGRAPHY
GADJAH MADA UNIVERSITY
YOGYAKARTA
2008


TABLE OF CONTENTS

MODULE 1 DISPLAYING IMAGES, PIXEL VALUE EXTRACTION, AND COLOR COMPOSITE CONCEPT
  Data conversion
  Displaying images
  Link display
  Pixel value extraction
  Color composite images
  Spectral pattern recognition using scatter plot

MODULE 2 RADIOMETRIC AND GEOMETRIC CORRECTION
  Radiometric correction
  Geometric correction

MODULE 3 IMAGE ENHANCEMENT AND SPATIAL FILTERING
  Image enhancement
  Spatial filtering

MODULE 4 IMAGE TRANSFORMATION
  Image fusion
  Band ratios
  Vegetation index
  Kauth-Thomas transformation (Tasseled Cap)

MODULE 5 MULTI-SPECTRAL CLASSIFICATION
  Unsupervised classification
  Supervised classification
  Post-classification operation
  Layout

REFERENCES


MODULE 1 DISPLAYING IMAGES, PIXEL VALUE EXTRACTION, AND COLOR COMPOSITE CONCEPT

A. DATA CONVERSION
The first step in the digital image processing sequence is converting the image dataset into the software's native format. The image dataset used in this tutorial is Landsat 7 ETM+, described as follows:
  Image       : Landsat 7 ETM+
  Acquisition : July 2002
  Area        : Semarang and surrounding area
  Dimension   : 700 x 1000 pixels
  Bands       : 6 bands (ETM1, ETM2, ETM3, ETM4, ETM5 and ETM7)
  Format      : *.lan (ERDAS 7.5)

1. Run ENVI 4.0: Start > All Programs > RSI ENVI 4.0 > ENVI.
2. Click File > Open External File > IP Software > ERDAS 7.5 (.lan).
3. On the Enter ERDAS (.lan) Filenames dialog, select the designated file (smg_raw.lan) and click Open. ENVI's standard interleave is BSQ, whereas the *.lan format is stored as BIL, so the file needs to be converted for further processing.

4. The Available Bands List window appears with 6 bands. Note that band 6 in this list is not the Landsat thermal band; it is a middle-infrared band (band 7 in the Landsat dataset). The thermal band is not used in this module because it has a different spatial resolution.
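The BSQ/BIL distinction only concerns how the pixel stream is ordered on disk. The numpy sketch below is only an illustration of that ordering: the file name is hypothetical, the dimensions are taken from the data description above, 8-bit pixels are assumed, and the real *.lan file also carries an ERDAS header that would have to be skipped first.

```python
import numpy as np

rows, cols, bands = 1000, 700, 6          # dimensions from the dataset description
raw = np.fromfile("smg_raw_noheader.dat", dtype=np.uint8)  # hypothetical headerless dump

# BIL (band interleaved by line): each image line stores all bands in sequence
bil = raw.reshape(rows, bands, cols)       # [line, band, column]
band1_from_bil = bil[:, 0, :]

# BSQ (band sequential): each band is stored as a complete image, one after another
bsq = raw.reshape(bands, rows, cols)       # [band, line, column]
band1_from_bsq = bsq[0, :, :]
```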

B. DISPLAYING IMAGES
The next step is displaying images on screen to investigate the image coverage, both for recognizing the distribution of geographical objects and for assessing image quality. The first way to display a digital image is in Gray Scale mode, i.e. according to the image grey levels, which represent the spectral reflectance intensity of objects in a single band.
1. On the Available Bands List, click the Gray Scale radio button.
2. Select a band to be displayed.
3. Click Load Band; three display windows appear:
   - Scroll window: shows the full image extent and acts as the image navigator,
   - Image window: a detailed view of the Scroll window, with menus for image information and simple image processing, and
   - Zoom window: magnifies the Image window down to pixel size.
4. Explore the whole image extent by dragging the red box on either the Scroll or the Image window. In the Zoom window, zoom in and out by clicking the + or - mark at the lower left; the magnification factor appears on the Zoom bar.
5. Display the remaining Landsat 7 ETM+ bands and evaluate their differences.


6. To display another image band in the same display window, click the desired band on the list and then click Load Band.
7. To display another band in a new display window, click the Display #... button (to the right of the Load Band button) > New Display; a new blank display window appears. Select the desired band on the list and click Load Band. Make sure the active display window is Display #2.
8. You can open more display windows by repeating the same steps.

C. LINK DISPLAY
One of the advantages of ENVI is its ability to link image band windows. The linkage is based on pixel position or geographic coordinates.
1. Display two image windows containing different bands and arrange the window positions for your convenience.
2. On a display window, click Tools > Link > Link Displays.
3. On the Link Displays window, set Display #1 Yes, Display #2 Yes, Link Size/Position to one of the displays, Dynamic Overlay on, Transparency 0, then click OK.
4. The display windows now correspond to each other.
5. Left-click and hold the mouse button to see the spectral response of different objects in the different image bands.
6. Repeat the procedure for all band combinations.
7. You can link 3 or more display windows at a time. If more than 2 display windows are used, manage the active Display # on the Link Displays window.
8. To remove the link, click Tools > Link > Unlink Display on the Image window.

D. PIXEL VALUE EXTRACTION
1. Identify the difference in spectral response for water, bare land, dense vegetation, and industrial rooftops.
2. Locate those target features on each individual grey-scale band image. The position of each target feature should be the same on every band (use the pixel location coordinates as guidance).
3. To get the pixel location and its value, click Tools > Cursor Location/Value.
4. The Cursor Location/Value window appears; the position and pixel value change as the cursor moves. If two display windows are linked, the cursor position and the corresponding pixel values appear as follows:

(Cursor Location/Value window: pixel position, pixel value in band #1, pixel value in band #2.)

TASK 1 (attach to the report):
1. Examine at least 9 pixels for each object on every band. Record the pixel coordinates, pixel values, and the mean pixel value for each object in each band.
2. Fill out the pixel value identification table below.


Pixel value identification table
(NP = the nine sampled pixel values, rNP = their mean; wtr = water, bl = bare land, dv = dense vegetation, ind = industrial rooftop. Example entry: NPdv = 2, 1, 2, 3, 1, 2, 2, 3, 2 and rNPdv = 2.)

Band | Center coordinate | NPwtr | rNPwtr | NPbl | rNPbl | NPdv | rNPdv | NPind | rNPind
 1   |                   |       |        |      |       |      |       |       |
 2   |                   |       |        |      |       |      |       |       |
 3   |                   |       |        |      |       |      |       |       |
 4   |                   |       |        |      |       |      |       |       |
 5   |                   |       |        |      |       |      |       |       |
 7   |                   |       |        |      |       |      |       |       |

3. Plot the mean pixel values from the table on a graph, with wavelength on the x axis and pixel value on the y axis. Use a different mark (or color) for each object. Examine the spectral signature pattern and compare it to the standard spectral reflectance curves (Picture 2). What can you conclude from the graph? Note: the image has not been radiometrically corrected, so some noise still influences the pixel values.

E. COLOR COMPOSITE IMAGES
Color composite images combine the information of several bands in one display, which makes the image easier to interpret visually. A composite is built from three bands, each assigned to the red, green, or blue color gun. The colors that appear on the composite are combinations of the object brightness levels in the bands used. The standard color composite mimics the appearance of a near-infrared aerial photograph; for Landsat this means assigning ETM4 (near infrared) to the red gun, ETM3 (red) to the green gun, and ETM2 (green) to the blue gun. This composition is usually called the standard false color composite. Which composition to use depends strongly on the application or the object being investigated.
1. On the Available Bands List, select the RGB radio button; the Selected Band window now lists 3 bands in color gun order (RGB).
2. First, create a false color composite: assign bands 4, 3, and 2 to the corresponding color guns by clicking the image bands on the list.
3. Check whether the input bands are assigned to the correct color guns.
4. Click Load RGB to display the image in the display window. Check the appearance of the resulting image.
5. Record the colors of the 4 targeted objects and find their locations from the pixel value identification table. Identify their pixel values in all composing bands using procedure D.3. Because this is a composite image, the Cursor Location/Value window now shows the pixel values of the 3 composing bands.

6. Create a new display window and build a new image composition of R: ETM3, G: ETM2, B: ETM1; check the resulting colors.
7. Link the second composite to the 432 composite, then investigate and record the colors and pixel values of the 4 targeted objects in the 321 composite.
8. Try other image compositions (452, 457, 352, etc.). Choose the composition that visualizes the objects best, then investigate and record the colors and pixel values of the 4 targeted objects.
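As a side illustration only, the sketch below builds the same 432 false color composite outside ENVI. It assumes the six bands are already available in a numpy dictionary called `bands` (that name, the loading step, and the 2%/98% stretch limits are assumptions, not part of the ENVI workflow).

```python
import numpy as np
import matplotlib.pyplot as plt

def stretch_2pct(band):
    """Linear stretch between the 2nd and 98th percentile, scaled to 0-1 for display."""
    lo, hi = np.percentile(band, (2, 98))
    return np.clip((band.astype(float) - lo) / (hi - lo), 0, 1)

# bands: dict of numpy arrays keyed by ETM band number (assumed to exist already)
rgb432 = np.dstack([stretch_2pct(bands[4]),   # near infrared -> red gun
                    stretch_2pct(bands[3]),   # red           -> green gun
                    stretch_2pct(bands[2])])  # green         -> blue gun

plt.imshow(rgb432)
plt.title("Standard false color composite (ETM 4-3-2)")
plt.show()
```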


TASK 2 (attach to the report):
1. What is a true color composite, and how is it created? How does it differ from the false color composite?
2. Based on your pixel value records for the 3 different image compositions: explain why dense vegetation appears strongly red in the 432 composition but dark green in the 321 composition. Identify the color of this object in your selected composition and explain why it appears that way.
3. Compare the 3 image compositions and make a table of the recognition level of the 4 targeted objects. Assign a level of recognition (very easy, easy, medium, hard, very hard) to each object in each composition, and draw a conclusion from the comparison.
4. Explain the principle of creating an image composition for soil recognition using a Landsat image.

F. SPECTRAL PATTERN RECOGNITION USING SCATTER PLOT
A scatter plot draws the relationship between the spectral responses of two different bands; the form of the relationship is shown by the clustering of pixel values. The scatter plot is a handy tool for recognizing objects from their spectral reflectance.

OBJECT SAMPLING
Before creating a scatter plot, pick some object samples as follows:
1. Display the image (single band or composite).
2. On the image display window click Overlay > Region of Interest.
3. On the #1 ROI Tool, select the Window Zoom radio button. Click ROI_Type > Polygon. Click Region #1 (Red) 0 points, then click Edit. Change the name and color if necessary, for example blue for water. Click OK when done.
4. Move the cursor to the Scroll or Image window and direct the red box to a water feature; make sure the position is fixed.
5. Move the cursor to the Zoom window and zoom until the pixel pattern is clearly visible. Decide on a homogeneous pixel group for water. Pick the sample by drawing a polygon around the targeted pixels, right-click to close the polygon, and right-click once more to fill it with the ROI color.
6. Repeat the procedure for the other objects. Save the ROIs: click File > Save ROIs, click Select All Items, navigate to the save-in directory and give the ROI file a name.

DISPLAYING SCATTER PLOT
1. On the Image window, click File > Preferences, set Image Window Xsize = 700 and Ysize = 1000, click OK. This makes sure the whole image extent is included in the scatter plot.
2. On the Image window menu click Tools > 2-D Scatter Plots, select the bands for the x and y axes, click OK. The scatter plot appears.
3. On the Scatter Plot window click File > Import ROIs, click Select All Items, OK. The ROI colors appear both on the image and on the scatter plot. Check the clustering of the objects on the scatter plot.
4. Try other bands for the x and y axes: on the scatter plot window click Options > Change Bands and select the desired bands. Check the clustering of the objects again.
5. You can use the pixel "dance" function by clicking and holding the cursor on the image while moving it across the scene; the corresponding pixel values are highlighted on the scatter plot, locating the spectral position of the object under the cursor.


6. Save one of the scatter plots showing the pixel value clustering pattern, annotate the spectral clusters, and analyze the clustering of the objects.
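The same 2-D feature-space view can be reproduced outside ENVI with matplotlib. This is only a sketch; `band3` and `band4` are assumed to be numpy arrays of equal shape.

```python
import matplotlib.pyplot as plt

# band3, band4: numpy arrays of the red and near-infrared bands (assumed to exist)
plt.scatter(band3.ravel(), band4.ravel(), s=1, c="grey", alpha=0.3)
plt.xlabel("Band 3 pixel value (red)")
plt.ylabel("Band 4 pixel value (near infrared)")
plt.title("2-D scatter plot: band 3 vs band 4")
plt.show()
```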

Picture 1. Spectral clustering of water, bare land, dense vegetation, and rooftop for band 3 vs band 4.

Picture 2. Spectral reflectance curves of turbid water, soil, and healthy vegetation; x axis: wavelength (µm), y axis: reflectance (%) (Ford, 1979 in Sutanto, 1992).


MODULE 2 RADIOMETRIC AND GEOMETRIC CORRECTION

A. RADIOMETRIC CORRECTION
Pixel values in a remote sensing image carry bit-coded spectral information about Earth surface features. This information is recorded by the detector as spectral radiance (mW cm-2 sr-1 µm-1). In an ideal remote sensing system, the spectral radiance recorded by the detector would correspond exactly to the spectral reflectance of the Earth surface. In practice, however, the signal in the visible and part of the NIR spectrum (about 0.36 to 0.9 µm) contains a bias due to atmospheric refraction, scattering, and absorption, caused especially by aerosols, water vapor, and dust. It is therefore necessary to correct this error and restore the pixel values toward the correct surface reflectance. There are 2 types of radiometric/atmospheric correction:
- Absolute atmospheric correction turns the digital brightness values recorded by the detector into scaled surface reflectance values. This approach requires an atmospheric radiative transfer model (MODTRAN, 6S, ACORN, ATREM, FLAASH, ATCOR) and atmospheric parameters at the time of acquisition.
- Relative atmospheric correction normalizes the intensities among the different bands within a single-date image, or normalizes the bands of multi-date imagery to a standard scene selected by the analyst. Methods in this approach include histogram adjustment (dark pixel subtraction), regression adjustment, the scatter plot method, shadow calibration, and pseudo-invariant features.
In this tutorial, relative atmospheric correction is applied using the histogram adjustment or dark pixel subtraction method, because it is the simplest and most widely applied. The correction algorithm is defined as:

    BVcorrection = BVoriginal - bias

Atmospheric scattering inflates brightness mainly in the visible spectrum (0.4 to 0.7 µm) and has little effect in the IR spectrum (> 0.7 µm). The method shifts the minimum pixel value of each band close to 0, so that the effect of atmospheric scattering is largely removed. See Jensen (2005), Chapter 6, Electromagnetic Radiation Principles and Radiometric Correction, for details.

BANDS MINIMUM AND MAXIMUM VALUE IDENTIFICATION
1. Open the image.
2. Calculate the statistics: on the menu bar click Basic Tools > Statistics > Compute Statistics; the Calculate Statistics Input File window appears.
3. Select the desired image and fill out the conditions as follows:
   Stats Subset    : Full Scene
   Spectral Subset : 6/6 Bands

4. Click OK; the Calculate Statistics Parameters window appears.
5. Check Text Report, Min/Max/Mean Plot, Calculate Histogram Statistics, and Histogram Plots, and set Histogram plots per window = 1.
6. Enter the name and directory of the statistics file (e.g. radiometrik.sta).
7. Activate the report for Screen and File, locate the save-in directory and name the file smg_minmax.txt.
8. Click OK; the image statistics report appears, together with the histograms and the minimum and maximum pixel values.


9. Identify the minimum and maximum pixel value of each band (listed in smg_minmax.txt).

10. Choose the band to be corrected and display its histogram: right-click on the histogram plot > Plot Key.
11. Save the uncorrected band histograms. On the histogram window click File > Save Plot As > Image File, set Output File Type: JPEG, locate the save-in directory, and click OK (these uncorrected histograms will be compared with the corrected histograms later on).

RADIOMETRIC CORRECTION PROCESS
1. On the menu click Basic Tools > Band Math; the Band Math window appears.
2. In the Enter an expression text box, type Bandx - bias (for example b1 - 62, where b1 is the input band and 62 is its minimum value), then click Add to List, click OK.
3. Enter the defined band, save the output as a file, locate the save-in directory, and name it smg_rx (r for radiometric, x for the band number).
4. Repeat the procedure for the other bands. Even if a band has a minimum value of 0, still follow this procedure so that a separate band file is created.
5. Display an uncorrected image and its corrected counterpart in another display window.
6. Link both images and examine the brightness difference between them.
7. Check the pixel values and verify that the corrected image is lower than the uncorrected image by the bias value.
8. Calculate the corrected image statistics and display the histograms. Compare the uncorrected and corrected histograms band by band. (A short sketch of the same dark pixel subtraction outside ENVI is given after this task.)

TASK 3 (attach to the report):
1. Using the radiometrically corrected image, plot the mean pixel values of the same objects as in Task 1 on a graph (spectral plot), with wavelength on the x axis and pixel value on the y axis. Use a different mark (or color) for each object. Examine the spectral signature pattern and compare it to the standard spectral reflectance curves (Picture 2) and to the spectral signatures created earlier (Task 1). Show both graphs in the report. What can you conclude from these graphs?
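A minimal numpy sketch of the dark pixel subtraction performed above with Band Math; it assumes each band is already a numpy array, and simply takes each band's minimum as its bias.

```python
import numpy as np

def dark_pixel_subtraction(band):
    """Relative atmospheric correction: shift the band so its minimum becomes 0."""
    bias = band.min()                          # dark pixel value, e.g. 62 for band 1
    corrected = band.astype(np.int32) - bias
    return corrected, bias

# bands: dict of numpy arrays keyed by band number (assumed to exist already)
corrected_bands = {}
for number, band in bands.items():
    corrected_bands[number], bias = dark_pixel_subtraction(band)
    print(f"band {number}: bias = {bias}")
```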


B. GEOMETRIC CORRECTION
Geometric correction places remotely sensed images in their proper planimetric (map) position so they can be associated with other spatial data. According to Jensen (2005), geometric correction involves 2 steps: spatial interpolation and intensity interpolation. Spatial interpolation concerns the geometric relation between pixel locations and the map or the Earth surface. It requires Ground Control Points (GCPs), which can be obtained from an already corrected image, a map, or GPS measurements.

Several polynomial transformations are used for coordinate/geometric interpolation, each providing a different degree of accuracy (Jensen, 1996):
- Affine (first order) transformation requires a minimum of 4 GCPs and works well for relatively flat topographic areas.
- Second order transformation requires a minimum of 6 GCPs (12 parameters), with a higher degree of accuracy than the previous one.
- Third order transformation requires a minimum of 10 GCPs (20 parameters) and is applicable to areas with higher topographic variation.
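For illustration only, a first order (affine) transformation can be estimated from GCPs by least squares, and the RMS error reported by ENVI has the same meaning as the residual computed here. The GCP coordinates in this sketch are hypothetical values, not taken from the tutorial image.

```python
import numpy as np

# Hypothetical GCPs: image coordinates (column, row) and map coordinates (easting, northing)
img = np.array([[120, 340], [610, 95], [455, 820], [80, 700]], dtype=float)
map_ = np.array([[435210.0, 9231550.0], [449930.0, 9238900.0],
                 [445260.0, 9217120.0], [434020.0, 9220760.0]])

# Affine model: E = a0 + a1*col + a2*row ; N = b0 + b1*col + b2*row
A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])
coef_e, *_ = np.linalg.lstsq(A, map_[:, 0], rcond=None)
coef_n, *_ = np.linalg.lstsq(A, map_[:, 1], rcond=None)

# RMS error of the fit over the GCPs
pred = np.column_stack([A @ coef_e, A @ coef_n])
rmse = np.sqrt(np.mean(np.sum((pred - map_) ** 2, axis=1)))
print("affine coefficients (E):", coef_e)
print("affine coefficients (N):", coef_n)
print("RMSE (map units):", rmse)
```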

Picture 3. Resampling of DN values from the original image (X', Y') to the corrected image (X, Y) (Jensen, 2005).

Intensity interpolation is performed by resampling to determine the pixel values of the geometrically rectified image. There are three methods of brightness value interpolation:
- Nearest neighbor interpolation: the new pixel value is taken from the closest pixel of the uncorrected image.
- Bilinear interpolation: the new pixel value is interpolated from the brightness values of the four nearest pixels of the uncorrected image.
- Cubic convolution interpolation: the new pixel value is assigned in much the same manner as bilinear interpolation, except that the weighted values of the 16 input pixels surrounding the desired location are used.

GROUND CONTROL POINTS COLLECTION
1. Open the radiometrically corrected image (an RGB composition is better).
2. On the menu bar, click Map > Registration > Select GCPs: Image to Map.
3. On the Image to Map Registration window define the parameters: coordinate system UTM, datum WGS 84, units meters, zone 49 S, click OK.
4. Have a quick look at the image and the map. Select at least 15 GCP candidates; each must be the same object at the same location on both the image and the map.
5. On the GCP Selection window, enter the map coordinates of the first GCP in the empty boxes; check the easting and northing positions.
6. To place this GCP on the image, put the cursor cross hair at the location corresponding to the map (use zoom for higher accuracy). If you are sure about the location, click Add Point; you now have the first GCP.


7. Continue the same procedure for the other GCPs. Once you have entered 4 GCPs, the RMS error value appears.
8. To show the GCPs you have collected, click Show List. To reduce the total RMS error, you can set GCPs with a high RMS error to Off.

9. When the number of GCPs satisfies the requirement and the RMS error is small, save the GCPs: on the GCP Selection window click File > Save GCPs w/ map coords, locate the directory, and give the file a name.

RECTIFICATION PROCESS
1. On the GCP Selection window, click Options > Warp File, define the file you are going to rectify (for example smg_r1), click OK.
2. On the Registration Parameters window, set the spatial interpolation parameters, the resampling method, the background value (0 = black, 255 = white), and the output file. Locate the save-in directory and name the file smg_rgx (rg = radiometric and geometric, x = image band number). Click OK to process.
3. The rectified image appears in the Available Bands List together with its Map Info, which contains the projection and coordinate information of the image.
4. Display the rectified image in a new display window, compare it with the unrectified image, and check its coordinate values.
5. To rectify the other bands, on the menu bar click Map > Registration > Warp from GCPs: Image to Map.
6. Open your GCP file. Check the Image to Map Registration parameters. Select the desired image file.
7. Define the Registration Parameters and the output file, click OK to process. Repeat these steps for all image bands.

TASK 4 (attach to the report):
1. The error sources of a remotely sensed image are divided into systematic and non-systematic errors. Explain those sources of error and how they arise.
2. The resampling process may use the nearest neighbor, bilinear, or cubic convolution method. Suppose you are going to use the rectified image for spectral-based analysis, such as correlating the spectral reflectance of image objects with their counterparts on the Earth surface. Which resampling method would you choose, and why?


MODULE 3 IMAGE ENHANCEMENT AND SPATIAL FILTERING

A. IMAGE ENHANCEMENT
Image enhancement algorithms are applied to remotely sensed data to improve the appearance of an image for human visual analysis or, occasionally, for subsequent machine analysis. There are two kinds of operation: point operations and local operations. Point operations modify the brightness value of each pixel independently of the characteristics of neighboring pixels. Local operations modify the value of each pixel in the context of the brightness values of the pixels surrounding it. Because enhancement changes the original pixel values, it is not recommended to use enhanced images for further pixel-based analyses. Two commonly applied image enhancement algorithms are contrast stretching and histogram equalization.

CONTRAST STRETCHING
Three techniques are used in this method, depending on the range of pixel values in the image dataset (Danoedoro, 1996; Jensen, 2005):
- Pixel value multiplication. For example, if an image with a pixel value range of 0-25 is multiplied by 3, the range is stretched to 0-75. The stretched range produces a higher-contrast image.
- Maximum-minimum contrast stretch. For example, if an image with a pixel value range of 0-25 is to be stretched to the range 0-255, the transformation equation is:

      BVoutput = ((BVinput - BVmin) / (BVmax - BVmin)) * 255

- Contrast shrinking: an image with a wide pixel value range is shrunk into a smaller range.

1. Check the image pixel value range; the contrast stretching method used here is the max-min contrast stretch, cutting the tails of the image histogram.
2. Display a band of the image.
3. On the display image window, click Enhance; the default stretching sub-menu appears and you can choose a method. For more control, select Interactive Stretching so that the band histogram window appears.

4. On the histogram window select Histogram_Source > Band and Stretch_Type > Linear.
5. In the Stretch boxes you can type the cut-off and saturation values, then press Enter so that the Stretch Bars shift to the entered values. You can also change the cut-off and saturation values by dragging the Stretch Bars. Notice the output histogram.

6. Click Apply to apply the contrast stretch to the image.
7. To save the stretched image, on the display window click File > Save Image As > Image File, fill out the specifications, and save to the desired directory.


8. Create a new display window and display the stretched image and the original image of the same band. Use the link function to compare them.
9. You can also apply the contrast stretching procedure to a composite image; the only difference is that the process is applied per color gun, selected with its radio button.
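A minimal numpy sketch of the max-min linear stretch from the equation above; the percentile cut-offs stand in for the interactive cut-off and saturation values (the 2%/98% figures are an assumption, not taken from the tutorial).

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Max-min linear contrast stretch to 0-255, cutting the histogram tails."""
    bv_min, bv_max = np.percentile(band, (low_pct, high_pct))
    stretched = (band.astype(float) - bv_min) / (bv_max - bv_min) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# band: numpy array of one image band (assumed to exist already)
stretched_band = linear_stretch(band)
```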

HISTOGRAM EQUALIZATION
1. Display the same band as above.
2. Follow the same procedure, except that on the histogram window you select Histogram_Source > Band and Stretch_Type > Equalization.
3. Define the cut-off and saturation values, click Apply.
4. Compare the result to the linearly stretched image.
5. You might also explore the other stretching methods, such as Piecewise Linear, Gaussian, or Square Root. Compare the results and identify where and how much they differ.

B. SPATIAL FILTERING
Spatial filtering is a local operation: pixel values in the original image are modified on the basis of the grey levels of neighboring pixels. It is usually applied to extract or emphasize important information and to remove unwanted noise. Technically, the operation uses a moving window algorithm that considers the neighboring pixel values (hence "local operation"). In the context of image enhancement there are two basic filters: low-pass and high-pass. Low-pass filters emphasize low frequency features (large-area changes in brightness) and de-emphasize the high frequency components of an image (local detail). High-pass filters do the reverse: they emphasize the detailed high frequency components and de-emphasize the more general low frequency information.
1. Display a band of the image.
2. There are two ways to perform a filtering operation: from the display image window or from the main menu.
3. First, on the display image window click Enhance > Filter and select the sharpen, smooth, or median filter. The image is filtered automatically. Compare the result to the original image.
4. Second, on the main menu, click Filter > Convolutions and Morphology.
5. On the Convolutions and Morphology Tool window, click Convolutions > select a filter type.
6. Define the kernel size and its values. Click Quick Apply to execute the filter on the image.


TASK 5 (attach to the report):
Try the other filter types, record the filters used and the changes in the image, and analyze the applications of these filters.
1. To create a customized filter for a specific application, click Convolutions > User Defined, use a kernel size of 3x3, and enter the following kernel values (four separate kernels):

     2  2  2        1 -2  1       -2 -1  0       -1 -1 -1
     2  4  2       -2  5 -2       -1  0  1       -1 16 -1
     2  2  2        1 -2  1        0  1  2       -1 -1 -1

2. Click Quick Apply and examine the results. Analyze the relationship between those numbers and the appearance of the filtered image.
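A sketch of applying the same 3x3 kernels outside ENVI with scipy. The kernel names and the normalization of the first kernel are my own additions, and `band` is assumed to be a numpy array; the result is a plain convolution and may not match ENVI's Quick Apply output exactly.

```python
import numpy as np
from scipy.ndimage import convolve

kernels = {
    "weighted_smooth": np.array([[2, 2, 2], [2, 4, 2], [2, 2, 2]]) / 20.0,  # normalized low-pass
    "sharpen":         np.array([[1, -2, 1], [-2, 5, -2], [1, -2, 1]]),
    "edge_emboss":     np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]]),
    "high_pass":       np.array([[-1, -1, -1], [-1, 16, -1], [-1, -1, -1]]),
}

# band: numpy array of one image band (assumed to exist already)
filtered = {name: convolve(band.astype(float), k, mode="nearest")
            for name, k in kernels.items()}
```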


MODULE 4 IMAGE TRANSFORMATION

A. IMAGE FUSION
Image fusion is the process of combining multiple image layers into a single composite image. It is commonly used to enhance the spatial resolution of multispectral datasets using higher resolution panchromatic data or single-band SAR data. To perform data fusion in ENVI, the files must either be georeferenced (in which case spatial resampling is performed on the fly) or, if not georeferenced, cover the same geographic area and have the same pixel size, image size, and orientation. The image fusion workflow is presented below:

Workflow: Landsat ETM bands 4, 3, 2 (red, green, blue color composite) > RGB to HSI transform (Hue, Saturation, Intensity) > replace the Intensity component with the resampled and contrast-stretched Landsat ETM panchromatic band > HSI to RGB transform > fused color composite.

Picture 4. Image fusion of a multispectral Landsat image with the panchromatic image. The process results in a multispectral Landsat image with 15 m spatial resolution (Janssen (ed.), 2000). See Jensen (2005), Chapter 5, Merging Remotely Sensed Data, for details.
1. Open the files smg_raw.lan (multispectral image, 30 m) and smgp_raw (panchromatic image, 15 m); neither image has been georeferenced yet.
2. Change the multispectral pixel size to the panchromatic pixel size. Click Basic Tools > Resize Data (Spatial/Spectral), select the multispectral file, OK.
3. The Resize Data Parameters window appears; in Output File Dimensions set xfac = 2 and yfac = 2. These are the multiplication factors from 700 x 1000 to 1400 x 2000 pixels. Why is this change of pixel dimensions necessary?
4. Define the resampling method and save the output file. The output file automatically appears on the Available Bands List.
5. Transform an RGB composite into HSV. On the main menu click Transform > Color Transforms > RGB to HSV. Create an image composition from the resized image and click OK.
6. Save the transformation output or leave it in memory, OK.
7. Stretch the panchromatic data to replace the intensity value of the composite image: click Basic Tools > Stretch Data, enter the panchromatic data, OK.
8. On the Data Stretching window, set the Output Data Range to min = 0 and max = 1.0. Save the stretching result as a file or in memory, OK.


9. Substitute the intensity value with the panchromatic data. Click Transform > Color Transforms > HSV to RGB; in the H and S boxes enter the Hue and Sat bands of the previous HSV image, and in the V box enter the stretched panchromatic image. Click OK and save the output as a file or in memory. The new image composition appears on the Available Bands List.

(Illustration: pixel size comparison of the original image (left) and the fused image (right) at the same zoom level.)

10. Click Load RGB to display the fused image.
11. Open a new display window and display the same image composition from the original smg_raw.lan. Compare the two images.

In practice, ENVI has a specific feature for fast and efficient image fusion. This feature requires both images to be georeferenced. The steps are as follows (a sketch of the same HSV substitution outside ENVI follows this list):
1. Open the georeferenced image and the file smgp.
2. On the main menu click Transform > Image Sharpening > HSV.
3. Enter the RGB input, OK.
4. Enter the high resolution image file, OK.
5. Select the resampling method and save the output as a file or in memory, OK.
6. Load RGB.
7. You can also try the Brovey color normalized fusion. Open a new display window and display the same composite image as above.
8. Click Transform > Image Sharpening > Color Normalized (Brovey). Enter the input window (Display #2), enter the high resolution image file, OK.
9. Select the resampling method and save the output as a file or in memory, OK. Load RGB, and compare the result to the HSV-based image fusion.
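A compact sketch of the HSV substitution idea using scikit-image color conversions. The band arrays, their prior resampling to the panchromatic grid, and the 0-1 scaling are all assumptions; ENVI's own HSV sharpening may differ in detail.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

# r, g, b: multispectral bands already resampled to the panchromatic grid, scaled to 0-1
# pan:     panchromatic band, scaled to 0-1 (all assumed to exist)
rgb = np.dstack([r, g, b])
hsv = rgb2hsv(rgb)
hsv[..., 2] = pan            # replace the Value (intensity) channel with the panchromatic band
fused = hsv2rgb(hsv)         # fused composite at panchromatic resolution
```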

B. BAND RATIOS
Band ratios are enhancements resulting from the division of the DN values in one spectral band by the corresponding values in another band. A major advantage of this operation is that it conveys the spectral or color characteristics of image features regardless of variations in scene illumination. Band ratios are often useful for discriminating subtle spectral variations that are masked by the brightness variations in individual spectral bands or in standard color composites. To enhance the spectral aspects resulting from band ratios, a Color Ratio Composite (CRC) can be employed. In this tutorial, some well-known Landsat band ratios are applied:
- band ratio 5/7 to emphasize clay, carbonate soil, and vegetation,
- band ratio 3/1 to emphasize iron oxide,
- band ratio 2/4 or 3/4 to emphasize vegetation, and
- band ratio 5/4 to emphasize vegetation as well.
1. Open the radiometrically and geometrically corrected image.
2. On the main menu click Transform > Band Ratios, enter one band as numerator and another as denominator.


3. Click Enter Pair, then OK. Save the image as a file or in memory. Display the result.
4. Do the same for the remaining band ratios.
5. Create a Color Ratio Composite (CRC) image from the band ratio images 5/7, 3/1, 2/4 (RGB); display the result and enhance it with the default histogram equalization on the display image window.
6. Examine the features on the composite image: clay or carbonate soils appear in magenta, iron oxide in green, and vegetation in red. You can develop your own color composition to emphasize other aspects.

C. VEGETATION INDEX
Several algorithms have been developed to emphasize vegetation density in remotely sensed images. In practice, vegetation indices are mathematical transformations that combine several image bands and produce a new image representing the vegetation features. One of the most popular transformations in vegetation studies is the Normalized Difference Vegetation Index (NDVI), which combines band ratioing, band subtraction, and band addition. The NDVI value indicates the amount of green vegetation in each pixel: high NDVI values represent dense green vegetation at that pixel location, and vice versa. The NDVI algorithm is formulated as:
    NDVI = (NIR - Red) / (NIR + Red)

The resulting values range from -1 to +1.

NDVI PROCESS
1. Open the smg image file.
2. On the main menu click Transform > NDVI (Vegetation Index), select smg.
3. Specify the near infrared and red bands, save as a file or in memory, OK.
4. Display the image and check the pixel values using Cursor Location/Value. (A sketch of the same calculation outside ENVI is given below.)
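A minimal numpy sketch of the NDVI formula above; `nir` and `red` are assumed to be numpy arrays of the near infrared and red bands, and a small epsilon guards against division by zero.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, values in [-1, +1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# nir, red: numpy arrays of ETM4 and ETM3 (assumed to exist already)
ndvi_image = ndvi(nir, red)
```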

DENSITY SLICE OF NDVI IMAGE
1. On the display image window click Overlay > Density Slice. Enter the NDVI image.
2. Create 5 brightness levels: on the density slice window, click Options > Set Number of Default Ranges, type 5, OK.
3. On the density slice window, click Options > Apply Default Ranges. Click Edit Range to edit the ranges and colors if necessary.
4. Click Apply, and save the classified image.
You can perform other vegetation transformations such as RVI, TVI, SAVI, DVI, PVI, VIF, etc., using Band Ratios or Band Math.

D. KAUTH-THOMAS TRANSFORMATION (TASSELED CAP)
The tasseled cap transformation (developed by Kauth and Thomas, 1976) is a linear transformation of Landsat MSS data that projects soil and vegetation information into a single plane in multispectral space. The orthogonal transformation is applied to the original data and results in four new dimensions: the soil brightness index (SBI), the green vegetation index (GVI), the yellow stuff index (YVI), and the non-such index (NSI), which is related to atmospheric effects.


Crist and Cicone (1984) further studied the use of this transformation for Landsat TM data, resulting in three new planes with the following coefficients:

Band | Brightness | Greenness | Wetness
TM1  |  0.33183   | -0.24717  |  0.13929
TM2  |  0.33121   | -0.16263  |  0.22490
TM3  |  0.55177   | -0.40639  |  0.40359
TM4  |  0.42514   |  0.85468  |  0.25178
TM5  |  0.48087   |  0.05493  | -0.70133
TM7  |  0.25252   | -0.11749  | -0.45732

For Landsat TM data, the tasseled cap vegetation index consists of Brightness, Greenness, and Third. Brightness and Greenness are equivalent to the SBI and GVI of Landsat MSS, while the third plane correlates with soil features, including soil moisture. For Landsat 7 ETM data, the tasseled cap transformation produces 6 outputs: Brightness, Greenness, Wetness, Fourth (Haze), Fifth, and Sixth. See http://landcover.usgs.gov/pdf/tasseled.pdf for further explanation.
1. Open the smg file.
2. On the main menu click Transform > Tasseled Cap, enter the file, OK.
3. On the Input File Type make sure Landsat 7 ETM is selected, save the file, and click OK.
4. Display each of the resulting images. Compare the Greenness image to the NDVI image.
5. Create a new composite image of R: Brightness, G: Greenness, B: Wetness. This composite enhances dry soil, dense vegetation, and water or moist soil. Analyze the resulting colors.

Dede and Carolita (1996) proposed an algorithm for a Soil Moisture Index (Indeks Kelengasan Tanah) as follows:

    IKL = (Wetness Index + Vegetation Index) / Brightness Index

1. Try to create the IKL from the tasseled cap images, using Band Math or Band Ratios.
2. Use density slice to classify the IKL image.
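As an illustration, the tasseled cap components are simply weighted sums of the input bands, so they can be computed as a matrix product. The sketch below uses the Landsat TM coefficients from the table above (not the Landsat 7 ETM+ set that ENVI applies), assumes the six bands are stacked in a numpy array called `stack`, and substitutes NDVI for the generic "vegetation index" in the IKL formula.

```python
import numpy as np

# Crist and Cicone (1984) TM coefficients: rows = Brightness, Greenness, Wetness;
# columns = TM1, TM2, TM3, TM4, TM5, TM7
COEF = np.array([
    [ 0.33183,  0.33121,  0.55177, 0.42514,  0.48087,  0.25252],
    [-0.24717, -0.16263, -0.40639, 0.85468,  0.05493, -0.11749],
    [ 0.13929,  0.22490,  0.40359, 0.25178, -0.70133, -0.45732],
])

# stack: numpy array of shape (6, rows, cols) holding bands 1, 2, 3, 4, 5, 7 (assumed to exist)
brightness, greenness, wetness = np.tensordot(COEF, stack.astype(float), axes=([1], [0]))

# Soil Moisture Index sketch after Dede and Carolita (1996); ndvi_image stands in for the
# vegetation index, and the epsilon only prevents division by zero
ikl = (wetness + ndvi_image) / (brightness + 1e-6)
```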


MODULE 5 MULTI-SPECTRAL CLASSIFICATION
Digital image classification is the process of grouping pixels into certain classes. The classification process consists of two stages. The first is the recognition of categories of real-world objects; in the context of remote sensing of the land surface these categories could include, for example, woodland, water bodies, grassland, and other land cover types, depending on the geographical scale and nature of the study. The second stage is the labeling of the entities (normally pixels) to be classified. In digital image classification these labels are numerical, so that a pixel recognized as belonging to the class water may be given the label 1, woodland may be labeled 2, and so on. The process of image classification requires the user to: (i) determine a priori the number and nature of the categories in terms of which the land cover is to be described, and (ii) assign numerical labels to the pixels on the basis of their properties using a decision-making procedure, usually termed a classification rule or decision rule. Sometimes these steps are called classification and identification (or labeling), respectively.
There are three types of classification methods: unsupervised, supervised, and hybrid classification. In an unsupervised classification, the identities of the land cover types to be specified as classes within a scene are generally not known a priori, because ground reference information is lacking or surface features within the scene are not well defined. In a supervised classification, on the other hand, the identity and location of some of the land cover types are known a priori through a combination of fieldwork, interpretation, map analysis, and personal experience. Hybrid classification is a combination of unsupervised and supervised classification. See Jensen (2005), Chapter 9, Thematic Information Extraction: Pattern Recognition, for details.

A. UNSUPERVISED CLASSIFICATION
In unsupervised classification the pixels are grouped purely from their spectral values, and land cover meaning is assigned to the resulting classes afterwards. (A clustering sketch outside ENVI is given after these steps.)
1. Open the smg image (ENVI standard format).
2. On the menu bar click Classification > Unsupervised > IsoData, select the multispectral image, OK.
3. Enter the required parameters: Maximum Iterations = 3, Minimum # Pixels in Class = 9. Save the image as a file, then click OK.
4. Display the image and check the classes created. To do so, on the image display window click Overlay > Annotation, on the Annotation window select Object > Map Key, click Edit Map Key Items, and count the number of classes created.
5. Display the composite image in another window and compare it to the classified image, using image linking if necessary. You can then analyze the classification result.
6. Try the K-Means method: click Classification > Unsupervised > K-Means. Use the same image and parameters and save the result as a file.
7. Compare the results of the two classification methods.
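ISODATA adds class splitting and merging on top of a k-means-style clustering; as a hedged stand-in, the sketch below clusters the pixel spectra with plain k-means from scikit-learn. `stack` is the assumed (6, rows, cols) band array used earlier, and the number of clusters is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans

# stack: numpy array of shape (6, rows, cols) with the six ETM bands (assumed to exist)
bands, rows, cols = stack.shape
pixels = stack.reshape(bands, -1).T                 # one row per pixel, one column per band

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
class_image = kmeans.labels_.reshape(rows, cols)    # unsupervised class label per pixel
```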

B. SUPERVISED CLASSIFICATION
In supervised classification, training areas are defined prior to the classification process. These training areas are selected based on the spectral reflectance pattern of each object in each band.
1. Display the most representative composite image.
2. Select the training areas or samples (Regions of Interest/ROIs) for each land cover object. See the sample extraction explained in MODULE 1, section F, on how to select ROIs.

Things to remember when selecting training areas:
- The training area has to be homogeneous, with approximately 100 pixels per class. The homogeneity of a training area can be judged from the similarity of the object's color on the composite image.
- Give the training area a name according to your interpretation and assign it a specific color.
- For practical reasons, a land cover class can be split into several sub-classes, for example vegetation1, vegetation2, and vegetation3 for the vegetation class; you can combine these classes later on.
- Select training areas as completely as possible from the image; all land cover classes should be sampled.
- Use the following spectral reflectance curves to assist in identifying objects.

(Spectral reflectance curves of turbid water, soil, and healthy vegetation; x axis: wavelength (µm), y axis: reflectance (%); Ford, 1979 in Sutanto, 1992.)

3. Save the ROIs and give the file a unique name that is easy to remember. Do not close the ROI window.

Computing ROI Separability
The ROI Separability option computes the spectral separability between selected ROI pairs for a given input file. Both the Jeffries-Matusita and Transformed Divergence separability measures are reported. These values range from 0 to 2.0 and indicate how well the selected ROI pairs are statistically separated. Values greater than 1.9 indicate that the ROI pairs have good separability. For ROI pairs with lower separability values, you should try to improve the separability by editing the ROIs or by selecting new ROIs. For ROI pairs with very low separability values (less than 1), you might want to combine them into a single ROI.
4. From the ROI Tool dialog menu bar, select Options > Compute ROI Separability.
5. Select an input file and perform optional spectral subsetting, then click OK.
6. The ROI Separability Calculation dialog appears. In the dialog, select ROIs for the separability calculation and click OK.
7. The separabilities are calculated and reported in a report dialog. Both the Jeffries-Matusita and Transformed Divergence values are reported for every ROI pair. The bottom of the report lists the ROI pair separability values from the least separable pair to the most separable.
8. To save the report to an ASCII file, select File > Save Text to ASCII.
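For reference, a Jeffries-Matusita distance in the 0-2 range can be computed from class means and covariance matrices via the Bhattacharyya distance. This is a hedged sketch (each ROI's pixels are assumed to be available as an (n_pixels, n_bands) array, with enough pixels for invertible covariances) and is not guaranteed to reproduce ENVI's reported values exactly.

```python
import numpy as np

def jeffries_matusita(x1, x2):
    """JM distance between two ROIs given as (n_pixels, n_bands) arrays; range 0..2."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = (c1 + c2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance between two multivariate normal class models
    b = 0.125 * diff @ np.linalg.inv(c) @ diff \
        + 0.5 * np.log(np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))

# roi_water, roi_veg: pixel samples of two ROIs (assumed to exist already)
print("JM(water, vegetation) =", jeffries_matusita(roi_water, roi_veg))
```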


Performing Supervised Classification
9. On the main menu click Classification > Supervised > select one of the available methods; try Parallelepiped first.
10. Select the input file. If you have not closed the ROI window, the training areas automatically appear in the Parallelepiped Parameters window. Click Select All Items. Save the output file and its rule, then click OK to run the classification.
11. Display the classification result. Black pixels are unclassified, i.e. they have not been assigned to any of the ROI classes. You can identify those pixels on the color composite image using the link facility and add more ROI classes if necessary. Then rerun the classification until the number of unclassified pixels is minimal.
12. Try other classification methods such as Minimum Distance, Mahalanobis Distance, and Maximum Likelihood, and compare their results.

C. POST-CLASSIFICATION OPERATION
This step, also termed a cosmetic operation, aims to improve the cartographic appearance of the classified image. Classification results often contain small isolated pixels. To remove them you can use a majority filter. It is a logical rather than a numerical filter, since a classified image consists of labels rather than quantized counts. In its simplest form, a filter window, usually 3 rows by 3 columns, is centered on the pixel of interest and the number of pixels allocated to each of the k classes within the window is counted. If the centre pixel is not a member of the majority class (the class with five or more pixels within the window), it is given the label of the majority class. The effect of this algorithm is to smooth the classified image by weeding out isolated pixels whose labels are dissimilar from those of the surrounding pixels. (A sketch of this filter outside ENVI follows these steps.)
1. Select the best classification result and display it.
2. On the main menu click Classification > Post Classification > Majority/Minority Analysis.
3. Enter the selected classification image file.
4. Click Select All Items to select all classes, set the Analysis Method radio button to Majority, and save the output as the file majority1. Click OK to execute.
5. Display the majority-filtered image and link it to the previously classified image. Examine the differences in class distribution.
6. If you think there are still too many isolated pixels, you may apply the majority filter again to the majority1 file and name the result majority2. Apply the majority filter wisely: remember that Landsat 7 ETM has a spatial resolution of 30 m, so 1 pixel represents a 30 x 30 m area on the ground!
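A hedged sketch of the 3 x 3 majority filter described above, applied to a label image with scipy's generic_filter. This simplified version always assigns the most frequent label (ties resolved by the lowest label) rather than requiring five or more pixels as in the strict rule, so it may differ slightly from ENVI's Majority/Minority Analysis.

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_rule(window):
    """Return the most frequent label in the 3x3 window (centre pixel included)."""
    labels, counts = np.unique(window.astype(int), return_counts=True)
    return labels[np.argmax(counts)]

# class_image: 2-D numpy array of integer class labels (assumed to exist already)
majority1 = generic_filter(class_image, majority_rule, size=3, mode="nearest")
```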

Comparison of the classified image (left) and the majority-filtered image (right); the class polygons are more compact after filtering.


D. LAYOUT
1. Display the majority-filtered image.
2. On the display image window click Overlay > Grid Lines; a coordinate grid appears together with a white background. Arrange the background so that the legend and other information can be placed on it.
3. On the Grid Line Parameters window, click Options > Set Display Borders. Enter 200 for the top margin, 100 for the bottom and left margins, and 400 for the right margin.
4. Set the map grid using Options > Edit Map Grid Attributes and the geographic grid using Options > Edit Geographic Grid Attributes.
5. Click Apply to show the change. Do not close the Grid window.
6. To add a title, scale, orientation, legend, etc., click Overlay > Annotation.
7. On the Annotation window click Object > Text, set Window: Scroll, the text color, font type and size, then type the texts related to the map (i.e. map title, additional information, map creator, etc.). Left-click on the scroll window, drag to the desired position, then right-click to place the text. Continue for the other information.
8. If you need to move text that is already placed, click Object > Selection/Edit, drag the object you want to edit, and place it in the desired position.
9. Lay out the image with high cartographic quality.
10. Do not forget to save the annotation: File > Save Annotation, and give it a name. This file stores the annotation you have created, not the image.
11. To save the image file, on the display image window click File > Save Image As > Image File, choose JPEG as the Output File Type, locate the save-in directory and the file name, click OK to save.

Layout example of classified image with 6 land cover classes.



REFERENCES

Campbell, James B. 2002. Introduction to Remote Sensing (3rd edition). New York: The Guilford Press.
Danoedoro, Projo. 1996. Pengolahan Citra Digital: Teori dan Aplikasinya dalam Bidang Penginderaan Jauh. Yogyakarta: Faculty of Geography UGM.
Dirgahayu, Dede and Carolita, Ita. 1996. Aplikasi Inderaja Untuk Mendeteksi Sebaran Kelengasan Lahan Secara Kuantitatif. Majalah LAPAN, January 1997 edition, no. 80, pp. 8-18.
Indrawati, Like. 2001. Karakteristik Pantulan Spektral Kandungan Kelembaban Tanah Permukaan pada Data Digital Multispektral Landsat TM di Sebagian Propinsi Daerah Istimewa Yogyakarta. S-1 Thesis. Yogyakarta: Faculty of Geography UGM.
Janssen, L.L.F. (ed.). 2000. Principles of Remote Sensing (An Introductory Textbook). The Netherlands: ITC.
Jensen, J.R. 2005. Introductory Digital Image Processing: A Remote Sensing Perspective. New Jersey: Prentice Hall.
Kamal, Muhammad. 2004. Kajian Kerentanan Banjir Menggunakan Data Digital Landsat ETM+ (Studi Kasus di Sebagian Lahan Rendah Kabupaten Demak dan Grobogan, Jawa Tengah). S-1 Thesis. Yogyakarta: Faculty of Geography UGM.
Mather, P.M. 1987. Computer Processing of Remotely Sensed Data. London: John Wiley & Sons.
Sutanto. 1992. Penginderaan Jauh Jilid I. Yogyakarta: Gadjah Mada University Press.

